Psychology A Level

Overview – Research Methods

Research methods are how psychologists and scientists come up with and test their theories. The A level psychology syllabus covers several different types of studies and experiments used in psychology as well as how these studies are conducted and reported:

  • Types of psychological studies (including experiments, observations, self-reporting, and case studies)
  • Scientific processes (including the features of a study, how findings are reported, and the features of science in general)
  • Data handling and analysis (including descriptive statistics and different ways of presenting data) and inferential testing

Note: Unlike all other sections across the 3 exam papers, research methods is worth 48 marks instead of 24. Not only that, the other sections often include a few research methods questions, so this topic is the most important on the syllabus!


Example question: Design a matched pairs experiment the researchers could conduct to investigate differences in toy preferences between boys and girls. [12 marks]

Types of study

There are several different ways a psychologist can research the mind, including:

  • Experiments
  • Observation
  • Self-reporting
  • Case studies

Each of these methods has its strengths and weaknesses. Different methods may be better suited to different research studies.

Experimental method

The experimental method looks at how variables affect outcomes. A variable is anything that changes between two situations (see below for the different types of variables). For example, Bandura’s Bobo doll experiment looked at how changing the variable of the role model’s behaviour affected how the child played.

Experimental designs

Experiments can be designed in different ways, such as:

  • Independent groups: Participants are divided into two groups. One group does the experiment under condition 1 of the independent variable, the other group under condition 2. Results are compared.
  • Repeated measures: Participants are not divided into groups. Instead, all participants do the experiment under condition 1, then afterwards the same participants do the experiment under condition 2. Results are compared.

A matched pairs design is another form of independent groups design. Participants are selected. Then, the researchers recruit another group of participants one-by-one to match the characteristics of each member of the original group. This provides two groups that are relevantly similar and controls for differences between groups that might skew results. The experiment is then conducted as a normal independent groups design.

Types of experiment

Laboratory vs. field experiment.

Experiments are carried out in two different types of settings:

  • Laboratory experiment: Conducted in a controlled, artificial environment set up by the researchers. E.g. Bandura’s Bobo doll experiment or Asch’s conformity experiments
  • Field experiment: Conducted in a natural, real-world setting. E.g. Bickman’s study of the effects of uniforms on obedience

Strengths of laboratory experiment over field experiment:

The controlled environment of a laboratory experiment minimises the risk of other variables outside the researchers’ control skewing the results of the trial, making it clearer what (if any) the causal effects of a variable are. Because the environment is tightly controlled, changes in outcome can be attributed with much greater confidence to changes in the independent variable.

Weaknesses of laboratory experiment over field experiment:

However, the controlled nature of a laboratory experiment might reduce its ecological validity . Results obtained in an artificial environment might not translate to real-life. Further, participants may be influenced by demand characteristics : They know they are taking part in a test, and so behave how they think they’re expected to behave rather than how they would naturally behave.

Natural and quasi experiment

Natural experiments are where variables vary naturally. In other words, the researcher can’t or doesn’t manipulate the variables. There are two types:

  • Natural experiment: The independent variable changes naturally, outside the researchers’ control. E.g. studying the effect a change in drug laws (variable) has on addiction
  • Quasi-experiment: The independent variable is an existing difference between people. E.g. studying differences between men (variable) and women (variable)

Observational method

The observational method looks at and examines behaviour. For example, Zimbardo’s prison study observed how participants behaved when given certain social roles.

Observational design

Behavioural categories.

An observational study will use behavioural categories to prioritise which behaviours are recorded and ensure the different observers are consistent in what they are looking for.

For example, a study of the effects of age and sex on stranger anxiety in infants might use the following behavioural categories to organise observational data:

| Infant | Sex | Age | Behaviour | Anxiety rating |
|--------|-----|-----|-----------|----------------|
| A      | F   | 2   | IS        | 2              |
| B      | F   | 1   | AS        | 4              |
| C      | M   | 3   | IS        | 1              |
| D      | F   | 2   | IS        | 2              |
| E      | M   | 2   | AS        | 6              |

Rather than writing complete descriptions of behaviours, the behaviours can be coded into categories. For example, IS = interacted with stranger, and AS = avoided stranger. Researchers can also create numerical ratings to categorise behaviour, like the anxiety rating example above.

Inter-observer reliability : In order for observations to produce reliable findings, it is important that observers all code behaviour in the same way. For example, researchers would have to make it very clear to the observers what the difference between a ‘3’ on the anxiety scale above would be compared to a ‘7’. This inter-observer reliability avoids subjective interpretations of the different observers skewing the findings.

Event and time sampling

Because behaviour is continuous and varied, it may not be possible to record every single behaviour during the observation period. So, in addition to categorising behaviour, study designers will also decide when to record a behaviour:

  • Event sampling: Counting how many times the participant behaves in a certain way.
  • Time sampling: Recording participant behaviour at regular time intervals. For example, making notes of the participant’s behaviour after every 1 minute has passed.

Note: Don’t get event and time sampling confused with participant sampling , which is how researchers select participants to study from a population.

Types of observation

Naturalistic vs. controlled.

Observations can be made in either a naturalistic or a controlled setting:

  • Naturalistic observation: Behaviour is observed in the setting where it would normally occur, without interference from the researchers. E.g. setting up cameras in an office or school to observe how people interact in those environments
  • Controlled observation: Behaviour is observed in a situation set up and controlled by the researchers. E.g. Ainsworth’s strange situation or Zimbardo’s prison study

Covert vs. overt

Observations can be either covert or overt :

  • Covert observation: Participants do not know they are being observed. E.g. setting up hidden cameras in an office
  • Overt observation: Participants know they are being observed. E.g. Zimbardo’s prison study

Participant vs. non-participant

In observational studies, the researcher/observer may or may not participate in the situation being observed:

  • Participant observation: The observer takes part in the situation being observed. E.g. in Zimbardo’s prison study, Zimbardo played the role of prison superintendent himself
  • Non-participant observation: The observer watches from outside the situation and does not take part. E.g. in Bandura’s Bobo doll experiment and Ainsworth’s strange situation, the observers did not interact with the children being observed

Self-report method

Self-report methods get participants to provide information about themselves. Information can be obtained via questionnaires or interviews .

Types of self-report

Questionnaires.

A questionnaire is a standardised list of questions that all participants in a study answer. For example, Hazan and Shaver used questionnaires to collate self-reported data from participants in order to identify correlations between attachment as infants and romantic attachment as adults.

Questions in a questionnaire can be either open or closed :

  • Closed questions: Participants choose from a fixed set of possible answers, producing quantitative data. E.g. “Are you religious? Yes/No” or “How many hours do you sleep per night? <6 hours / 6–8 hours / >8 hours”
  • Open questions: Participants answer in their own words, producing qualitative data. E.g. “How did you feel when you thought you were administering a lethal shock?” or “What do you look for in a romantic partner and why?”

Strengths of questionnaires:

  • Quantifiable: Closed questions provide quantifiable data in a consistent format, which enables researchers to statistically analyse the information in an objective way.
  • Replicability: Because questionnaires are standardised (i.e. pre-set, all participants answer the same questions), studies involving them can be easily replicated . This means the results can be confirmed by other researchers, strengthening certainty in the findings.

Weaknesses of questionnaires:

  • Biased samples: Questionnaires handed out to people at random will select for participants who actually have the time and are willing to complete the questionnaire. As such, the responses may be biased towards those of people who e.g. have a lot of spare time.
  • Dishonest answers: Participants may lie in their responses – particularly if the true answer is something they are embarrassed or ashamed of (e.g. on controversial topics or taboo topics like sex)
  • Misunderstanding/differences in interpretation: Different participants may interpret the same question differently. For example, the “are you religious?” example above could be interpreted by one person to mean they go to church every Sunday and pray daily, whereas another person may interpret religious to mean a vague belief in the supernatural.
  • Less detail: Interviews may be better suited for detailed information – especially on sensitive topics – than questionnaires. For example, participants are unlikely to write detailed descriptions of private experiences in a questionnaire handed to them on the street.

Interviews

In an interview, participants are asked questions in person. For example, Bowlby interviewed 44 children when studying the effects of maternal deprivation.

Interviews can be either structured or unstructured :

  • Structured interview: Questions are standardised and pre-set. The interviewer asks all participants the same questions in the same order.
  • Unstructured interview: The interviewer discusses a topic with the participant in a less structured and more spontaneous way, pursuing avenues of discussion as they come up.

Interviews can also be a cross between the two – these are called semi-structured interviews .

Strengths of interviews:

  • More detail: Interviews – particularly unstructured interviews conducted by a skilled interviewer – enable researchers to delve deeper into topics of interest, for example by asking follow-up questions. Further, the personal touch of an interviewer may make participants more open to discussing personal or sensitive issues.
  • Replicability: Structured interviews are easily replicated because participants are all asked the same pre-set list of questions. This replicability means the results can be confirmed by other researchers, strengthening certainty in the findings.

Weaknesses of interviews:

  • Lack of quantifiable data: Although unstructured interviews enable researchers to delve deeper into interesting topics, this lack of structure may produce difficulties in comparing data between participants. For example, one interview may go down one avenue of discussion and another interview down a different avenue. This qualitative data may make objective or statistical analysis difficult.
  • Interviewer effects: The interviewer’s appearance or character may bias the participant’s answers. For example, a female participant may be less comfortable answering questions on sex asked by a male interviewer and thus give different answers than if she were asked by a female interviewer.

Case study method

Note: This topic is A level only. You don’t need to learn about case studies if you are taking the AS exam only.

Case studies are detailed investigations into an individual, a group of people, or an event. For example, the biopsychology page describes a case study of a young boy who had the left hemisphere of his brain removed and the effects this had on his language skills.

In a case study, researchers use many of the methods described above – observation , questionnaires , interviews – to gather data on a subject. However, because case studies are studies of a single subject, the data they provide is primarily qualitative rather than quantitative . This data is then used to build a case history of the subject. Researchers then interpret this case history to draw their conclusions.

Types of case study

Typical vs. unusual cases.

Most case studies focus on unusual individuals, groups, and events.

Longitudinal

Many case studies are longitudinal . This means they take place over an extended time period, with researchers checking in with the subject at various intervals. For example, the case study of the boy who had his left hemisphere removed collected data on the boy’s language skills at ages 2.5, 4, and 14 to see how he progressed.

Strengths of case studies:

  • Provides detailed qualitative data: Rather than focusing on one or two aspects of behaviour at a single point in time (e.g. in an experiment ), case studies produce detailed qualitative data.
  • Allows for investigation into issues that may be impractical or unethical to study otherwise. For example, it would be unethical to remove half a toddler’s brain just to experiment , but if such a procedure is medically necessary then researchers can use this opportunity to learn more about the brain.

Weaknesses of case studies:

  • Lack of scientific rigour: Because case studies are often single examples that cannot be replicated , the results may not be valid when applied to the general population.
  • Researcher bias: The small sample size of case studies also means researchers need to apply their own subjective interpretation when drawing conclusions from them. As such, these conclusions may be skewed by the researcher’s own bias and not be valid when applied more generally. This criticism is often directed at Freud’s psychoanalytic theory because it draws heavily on isolated case studies of individuals.

Scientific processes

This section looks at how science works more generally – in particular how scientific studies are organised and reported . It also covers ways of evaluating a scientific study.

Study features and design

Studies will usually have an aim . The aim of a study is a description of what the researchers are investigating and why . For example, “to investigate the effect of SSRIs on symptoms of depression” or “to understand the effect uniforms have on obedience to authority”.

Studies seek to test a hypothesis . The experimental/alternate hypothesis of a study is a testable prediction of what the researchers expect to happen.

  • Experimental/alternate hypothesis: E.g. “That SSRIs will reduce symptoms of depression” or “subjects are more likely to comply when orders are issued by someone wearing a uniform”
  • Null hypothesis: A prediction that the variable being studied will have no effect. E.g. “That SSRIs have no effect on symptoms of depression” or “subject compliance will be the same when orders are issued by someone wearing a uniform as when orders are issued by someone not wearing a uniform”

Either the experimental/alternate hypothesis or the null hypothesis will be supported by the results of the experiment.

It’s often not possible or practical to conduct research on everyone your study is supposed to apply to. So, researchers use sampling to select participants for their study.

  • Target population: The entire group the research is intended to apply to. E.g. all humans, all women, all men, all children, etc.
  • Sample: The subset of the target population that actually takes part in the study. E.g. 10,000 humans, 200 women from the USA, children at a certain school

For example, the target population (i.e. who the results apply to) of Asch’s conformity experiments is all humans – but Asch didn’t conduct the experiment on that many people! Instead, Asch recruited 123 males and generalised the findings from this sample to the rest of the population.

Researchers choose from different sampling techniques – each has strengths and weaknesses.

Sampling techniques

Random sampling.

The random sampling method involves selecting participants from a target population at random – such as by drawing names from a hat or using a computer program to select them. This method means each member of the population has an equal chance of being selected and thus is not subject to any bias.
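
To make the idea concrete, here is a minimal Python sketch of random sampling. The population list, names, and sample size are invented purely for illustration:

```python
import random

# Hypothetical sampling frame: a list of everyone in the target population
population = [f"Person {i}" for i in range(1, 1001)]

# Random sampling: every member has an equal chance of being chosen
random_sample = random.sample(population, k=50)
print(random_sample[:5])
```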

Strengths of random sampling:

  • Unbiased: Selecting participants by random chance reduces the likelihood that researcher bias will skew the results of the study.
  • Representative: If participants are selected at random – particularly if the sample size is large – it is likely that the sample will be representative of the population as a whole. For example, if the ratio of men:women in a population is 50:50 and participants are selected at random, it is likely that the sample will also have a ratio of men to women that is 50:50.

Weaknesses of random sampling:

  • Impractical: It’s often impractical/impossible to include all members of a target population for selection. For example, it wouldn’t be feasible for a study on women to include the name of every woman on the planet for selection. But even if this was done, the randomly selected women may not agree to take part in the study anyway.

Systematic sampling

The systematic sampling method involves selecting participants from a target population by selecting them at pre-set intervals. For example, selecting every 50th person from a list, or every 7th, or whatever the interval is.
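
As a rough illustration, the same kind of made-up population list can be sampled systematically by taking every 50th name (the interval here is arbitrary):

```python
# Hypothetical list of the target population, in no particular order
population = [f"Person {i}" for i in range(1, 1001)]

# Systematic sampling: take every 50th person from the list
interval = 50
systematic_sample = population[interval - 1::interval]  # the 50th, 100th, 150th, ...
print(len(systematic_sample))  # 20 participants from a population of 1,000
```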

Strengths of systematic sampling:

  • Unbiased and representative: Like random sampling , selecting participants according to a numerical interval provides an objective means of selecting participants that prevents researcher bias being able to skew the sample. Further, because the sampling method is independent of any particular characteristic (besides the arbitrary characteristic of the participant’s order in the list) this sample is likely to be representative of the population as a whole.

Weaknesses of systematic sampling:

  • Unexpected bias: Some characteristics could occur more or less frequently at certain intervals, making a sample that is selected based on that interval biased. For example, houses tend to have even numbers on one side of a road and odd numbers on the other. If one side of the road is more expensive than the other and you select every 4th house, say, then you will only select even-numbered houses from one side of the road – and this sample may not be representative of the road as a whole.

Stratified sampling

The stratified sampling method involves dividing the population into relevant groups for study, working out what percentage of the population is in each group, and then randomly sampling the population according to these percentages.

For example, let’s say 20% of the population is aged 0-18, and 50% of the population is aged 19-65, and 30% of the population is aged >65. A stratified sample of 100 participants would randomly select 20x 0-18 year olds, 50x 19-65 year olds, and 30x people over 65.
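The age-band example above can be sketched in Python as follows. The group sizes are invented so that they match the 20/50/30 percentage split:

```python
import random

# Hypothetical population, grouped into the age bands (strata) used above
strata = {
    "0-18":  [f"Young {i}" for i in range(2000)],   # 20% of the population
    "19-65": [f"Adult {i}" for i in range(5000)],   # 50% of the population
    ">65":   [f"Older {i}" for i in range(3000)],   # 30% of the population
}

sample_size = 100
total = sum(len(group) for group in strata.values())

stratified_sample = []
for band, group in strata.items():
    # Each stratum contributes participants in proportion to its share of the population
    n_from_group = round(sample_size * len(group) / total)
    stratified_sample += random.sample(group, n_from_group)

print(len(stratified_sample))  # 100 participants: 20 + 50 + 30
```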

Strengths of stratified sampling:

  • Representative: The stratification is deliberately designed to yield a sample that is representative of the population as a whole. You won’t get people with certain characteristics being over- or under-represented within the sample.
  • Unbiased: Because participants within each group are selected randomly , researcher bias is unable to skew who is included in the study.

Weaknesses of stratified sampling:

  • Requires knowledge of population breakdown: Researchers need to accurately gauge what percentage of the population falls into what group. If the researchers get these percentages wrong, the sample will be biased and some groups will be over- or under-represented.

Opportunity and volunteer sampling

The opportunity and volunteer sampling methods select whoever is available or willing to take part:

  • Opportunity sampling: Researchers select whoever happens to be available at the time. E.g. approaching people in the street and asking them to complete a questionnaire.
  • Volunteer sampling: Participants put themselves forward in response to an invitation. E.g. placing an advert online inviting people to complete a questionnaire.

Strengths of opportunity and volunteer sampling:

  • Quick and easy: Approaching participants ( opportunity sampling) or inviting participants ( volunteer sampling) is quick and straightforward. You don’t have to spend time compiling details of the target population (like in e.g. random or systematic sampling ), nor do you have to spend time dividing participants according to relevant categories (like in stratified sampling ).
  • May be the only option: With natural experiments – where a variable changes as a result of something outside the researchers’ control – opportunity sampling may be the only viable sampling method. For example, researchers couldn’t randomly sample 10 cities from all the cities in the world and change the drug laws in those cities to see the effects – they don’t have that kind of power. However, if a city is naturally changing its drug laws anyway, researchers could use opportunity sampling to study that city for research.

Weaknesses of opportunity and volunteer sampling:

  • Unrepresentative: The pool of participants will likely be biased towards certain kinds of people. For example, if you conduct opportunity sampling on a weekday at 10am, this sample will likely exclude people who are at work. Similarly, volunteer sampling is likely to exclude people who are too busy to take part in the study.

Independent vs. dependent variables

If the study involves an experiment , the researchers will alter an independent variable to measure its effects on a dependent variable :

  • Independent variable: The variable the researchers deliberately change or manipulate. E.g. in Bickman’s study of the effects of uniforms on obedience, the independent variable was the uniform of the person giving orders.
  • Dependent variable: The variable the researchers measure to see the effect of that change. E.g. in Bickman’s study, the dependent variable was how many people followed the orders.

Extraneous and confounding variables

In addition to the variables actually being investigated ( independent and dependent ), there may be additional (unwanted) variables in the experiment. These additional variables are called extraneous variables .

Researchers must control for extraneous variables to prevent them from skewing the results and leading to false conclusions. When extraneous variables are not properly controlled for they are known as confounding variables .

For example, if you’re studying the effect of caffeine on reaction times, it might make sense to conduct all experiments at the same time of day to prevent this extraneous variable from confounding the results. Reaction times change throughout the day and so if you test one group of subjects at 3pm and another group right before they go to bed, you may falsely conclude that the second group had slower reaction times.

Operationalisation of variables

Operationalisation of variables is where researchers clearly and measurably define the variables in their study.

For example, an experiment on the effects of sleep ( independent variable ) on anxiety ( dependent variable ) would need to clearly operationalise each variable. Sleep could be defined by number of hours spent in bed, but anxiety is a bit more abstract and so researchers would need to operationalise (i.e. define) anxiety such that it can be quantified in a measurable and objective way.

If variables are not properly operationalised, the experiment cannot be properly replicated , experimenters’ subjective interpretations may skew results, and the findings may not be valid .

Pilot studies

A pilot study is basically a practice run of the proposed research project. Researchers will use a small number of participants and run through the procedure with them. The purpose of this is to identify any problems or areas for improvement in the study design before conducting the research in full. A pilot study may also give an early indication of whether the results will be statistically significant .

For example, if a task is too easy for participants, or it’s too obvious what the real purpose of an experiment is, or questions in a questionnaire are ambiguous, then the results may not be valid . Conducting a pilot study first may save time and money as it enables researchers to identify and address such issues before conducting the full study on thousands of participants.

Study reporting

Features of a psychological report.

The report of a psychological study (research paper) typically contains the following sections in the following order:

  • Title: A short and clear description of the research.
  • Abstract: A summary of the research. This typically includes the aim and hypothesis , methods, results, and conclusion.
  • Introduction: Funnel technique: Broad overview of the context (e.g. current theories, previous studies, etc.) before focusing in on this particular study, why it was conducted, its aims and hypothesis .
  • Study design: This will explain what method was used (e.g. experiment or observation ), how the study was designed (e.g. independent groups or repeated measures ), and identification and operationalisation of variables .
  • Participants: A description of the target population to be studied, the sampling method , how many participants were included.
  • Equipment used: A description of any special equipment used in the study and how it was used.
  • Standardised procedure: A detailed step-by-step description of how the study was conducted. This allows for the study to be replicated by other researchers.
  • Controls : An explanation of how extraneous variables were controlled for so as to generate accurate results.
  • Results: A presentation of the key findings from the data collected. This typically consists of written summaries of the raw data (descriptive statistics), which may also be presented in tables, charts, graphs, etc. The raw data itself is typically included in appendices.
  • Discussion: An explanation of what the results mean and how they relate to the experimental hypothesis (supporting or contradicting it), any issues with how results were generated, how the results fit with other research, and suggestions for future research.
  • Conclusion: A short summary of the key findings from the study.
  • References: A list of any sources (books, journal articles, etc.) cited in the report. For example:
      ◦ Book: Milgram, S., 2010. Obedience to Authority. 1st ed. Pinter & Martin.
      ◦ Journal article: Bandura, A., Ross, D. and Ross, S., 1961. Transmission of Aggression through Imitation of Aggressive Models. The Journal of Abnormal and Social Psychology, 63(3), pp.575-582.
  • Appendices: This is where you put any supporting materials that are too detailed or long to include in the main report. For example, the raw data collected from a study, or the complete list of questions in a questionnaire .

Peer review

Peer review is a way of assessing the scientific credibility of a research paper before it is published in a scientific journal. The idea with peer review is to prevent false ideas and bad research from being accepted as fact.

It typically works as follows: The researchers submit their paper to the journal they want it to be published in, and the editor of that journal sends the paper to expert reviewers (i.e. psychologists who are experts in that area – the researchers’ ‘peers’) who evaluate the paper’s scientific validity. The reviewers may accept the paper as it is, accept it with a few changes, reject it and suggest revisions and resubmission at a later date, or reject it completely.

There are several different methods of peer review:

  • Open review: The researchers and the reviewers are known to each other.
  • Single-blind: The researchers do not know the names of the reviewers. This prevents the researchers from being able to influence the reviewer. This is the most common form of peer review.
  • Double-blind: The researchers do not know the names of the reviewers, and the reviewers do not know the names of the researchers. This additionally prevents the reviewer’s bias towards the researcher from influencing their decision whether to accept their paper or not.

Criticisms of peer review:

  • Bias: There are several ways peer review can be subject to bias. For example, academic research (particularly in niche areas) takes place among a fairly small circle of people who know each other and so these relationships may affect publication decisions. Further, many academics are funded by organisations and companies that may prefer certain ideas to be accepted as scientifically legitimate, and so this funding may produce conflicts of interest.
  • Doesn’t always prevent fraudulent/bad research from being published: There are many documented examples of fraudulent research passing peer review and being published.
  • Prevents progress of new ideas: Reviewers of papers are typically older and established academics who have made their careers within the current scientific paradigm. As such, they may reject new or controversial ideas simply because they go against the current paradigm rather than because they are unscientific.
  • Plagiarism: In single-blind and double-blind peer reviews, the reviewer may use their anonymity to reject or delay a paper’s publication and steal its good ideas for themselves.
  • Slow: Peer review can mean it takes months or even years between the researcher submitting a paper and its publication.

Study evaluation

Ethics

In psychological studies, ethical issues are questions of what is morally right and wrong. An ethically-conducted study will protect the health and safety of the participants involved and uphold their dignity, privacy, and rights.

To provide guidance on this, the British Psychological Society (BPS) has published a code of human research ethics:

  • Informed consent: Participants are told the project’s aims, the data being collected, and any risks associated with participation.
  • Participants have the right to withdraw or modify their consent at any time.
  • Researchers can use incentives (e.g. money) to encourage participation, but these incentives can’t be so big that they would compromise a participant’s freedom of choice.
  • Researchers must consider the participant’s ability to consent (e.g. age, mental ability, etc.)
  • Deception: If a study involves deceiving participants about its true purpose, fully informed consent is not possible. Ways of dealing with this include:
      ◦ Prior (general) consent: Informing participants that they will be deceived without telling them the nature of the deception. However, this may affect their behaviour as they try to guess the real nature of the study.
      ◦ Retrospective consent: Informing participants that they were deceived after the study is completed and asking for their consent. The problem with this is that if they don’t consent then it’s too late.
      ◦ Presumptive consent: Asking people who aren’t participating in the study if they would be willing to participate in the study. If these people would be willing to give consent, then it may be reasonable to assume that those taking part in the study would also give consent.
  • Confidentiality: Personal data obtained about participants should not be disclosed (unless the participant agreed to this in advance). Any data that is published will not be publicly identifiable as the participant’s.
  • Debriefing: Once data gathering is complete, researchers must explain all relevant details of the study to participants – especially if deception was involved. If a study might have harmed the individual (e.g. its purpose was to induce a negative mood), it is ethical for the debrief to address this harm (e.g. by inducing a happy mood) so that the participant does not leave the study in a worse state than when they entered.

Reliability

Study results are reliable if the same results can be consistently replicated under the same circumstances. If results are inconsistent then the study is unreliable.

Note: Just because a study is reliable, its results are not automatically valid . A broken tape measure may reliably (i.e. consistently) record a person’s height as 200m, but that doesn’t mean this measurement is accurate.

There are several ways researchers can assess a study’s reliability:

Test-retest

Test-retest is when you give the same test to the same person on two different occasions. If the results are the same or similar both times, this suggests they are reliable.

For example, if your study used scales to measure participants’ weight, you would expect the scales to record the same (or a very similar) weight for the same person in the morning as in the evening. If the scales said the person weighed 100kg more later that same day, the scales (and therefore the results of the study) would be unreliable.

Inter-observer

Inter-observer reliability is a way to test the reliability of observational studies .

For example, if your study required observers to assess participants’ anxiety levels, you would expect different observers to grade the same behaviour in the same way. If one observer rated a participant’s behaviour a 3 for anxiety, and another observer rated the exact same behaviour an 8, the results would be unreliable.

Inter-observer reliability can be assessed mathematically by looking for correlation between observers’ scores. Inter-observer reliability can be improved by setting clearly defined behavioural categories .
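
A minimal sketch of this check, using invented anxiety ratings from two observers (requires Python 3.10+ for statistics.correlation):

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical anxiety ratings (1-10) given by two observers watching the same infants
observer_1 = [2, 4, 1, 2, 6, 3, 5, 7]
observer_2 = [3, 4, 2, 2, 6, 4, 5, 8]

# A coefficient close to +1 suggests good inter-observer reliability;
# a low coefficient suggests the behavioural categories need to be defined more clearly
r = correlation(observer_1, observer_2)
print(round(r, 2))
```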

Validity

Study results are valid if they accurately measure what they are supposed to. There are several ways researchers can assess a study’s validity:

  • Concurrent validity: Whether the results correlate with the results of an established measure of the same thing. E.g. let’s say you come up with a new test to measure participants’ intelligence levels. If participants scoring highly on your test also scored highly on a standardised IQ test and vice versa, that would suggest your test has concurrent validity because participants’ scores are correlated with a known accurate test.
  • Face validity: Whether the measure appears, on the face of it, to measure what it is supposed to. E.g. a study that measures participants’ intelligence levels by asking them when their birthday is would not have face validity. Getting participants to complete a standardised IQ test would have greater face validity.
  • Ecological validity: Whether the results generalise to real-life settings. E.g. let’s say your study was supposed to measure aggression levels in response to someone annoying. If the study was conducted in a lab and the participant knew they were taking part in a study, the results probably wouldn’t have much ecological validity because of the unrealistic environment.
  • Temporal validity: Whether the results still hold true over time. E.g. a study conducted in 1920 that measured participants’ attitudes towards social issues may have low temporal validity because societal attitudes have changed since then.

Control of extraneous variables

There are several different types of extraneous variables that can reduce the validity of a study. A well-conducted psychological study will control for these extraneous variables so that they do not skew the results.

Demand characteristics

Demand characteristics are extraneous variables where the demands of a study make participants behave in ways they wouldn’t behave outside of the study. This reduces the study’s ecological validity .

For example, if a participant guesses the purpose of an experiment they are taking part in, they may try to please the researcher by behaving in the ‘right’ way rather than the way they would naturally. Alternatively, the participant might rebel against the study and deliberately try to sabotage it (e.g. by deliberately giving wrong answers).

In some study designs, researchers can control for demand characteristics using single-blind methods. For example, a drug trial could give half the participants the actual drug and the other half a placebo but not tell participants which treatment they received. This way, both groups will have equal demand characteristics and so any differences between them should be down to the drug itself.

Investigator effects

Investigator effects are another extraneous variable where the characteristics of the researcher affect the participant’s behaviour. Again, this reduces the study’s ecological validity .

Many characteristics – e.g. the researcher’s age, gender, accent, what they’re wearing – could potentially influence the participant’s responses. For example, in an interview about sex, females may feel less comfortable answering questions asked by a male interviewer and thus give different answers than if they were asked by a female. The researcher’s biases may also come across in their body language or tone of voice, affecting the participant’s responses.

In some study designs, researchers can control for investigator effects using double-blind methods. In a double-blind drug trial, for example, neither the participants nor the researchers know which participants get the actual drug and which get the placebo. This way, the researcher is unable to give any clues (consciously or unconsciously) to participants that would affect their behaviour.

Participant variables

Participant variables are differences between participants. These can be controlled for by random allocation .

For example, in an experiment on the effect of caffeine on reaction times, participants would be randomly allocated into either the caffeine group or the non-caffeine group. A non-random allocation method, such as allocating caffeine to men and placebo to women, could mean variables in the allocation method (in this case gender) skew the results. When participants are randomly allocated, any extraneous variables (e.g. gender in this case) will be spread evenly between the groups and so will not skew the results of one group more than the other.
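
A minimal sketch of random allocation for the caffeine example above (the participant IDs and group sizes are invented):

```python
import random

# Hypothetical pool of 20 participants recruited for the caffeine study
participants = [f"P{i}" for i in range(1, 21)]

# Random allocation: shuffle the pool, then split it into the two conditions
random.shuffle(participants)
caffeine_group = participants[:10]
placebo_group = participants[10:]

# Because allocation is random, participant variables (age, gender, etc.)
# should end up spread roughly evenly across both groups
print(caffeine_group)
print(placebo_group)
```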

Situational variables

Situational variables are features of the environment the experiment is conducted in. These can be controlled for by standardisation .

For example, all the tests of caffeine on reaction times would be conducted in the same room, at the same time of day, using the same equipment, and so on to prevent these features of the environment from skewing the results.

In a repeated measures experiment, researchers may use counterbalancing to control for the order in which tasks are completed.

For example, half of participants would do task A followed by task B, and the other half would do task B followed by task A.

Implications of psychological research for the economy

Psychological research often has practical applications in real life. The following are some examples of how psychological findings may affect the economy:

  • Attachment : Bowlby’s maternal deprivation hypothesis suggests that periods of extended separation between mother and child before age 3 are harmful to the child’s psychological development. And if mothers stay at home during this period, they can’t go out to work. However, some more recent research challenges Bowlby’s conclusions, suggesting that substitutes (e.g. the father , or nursery care) can care for the child, allowing the mother to go back to work sooner and remain economically active.
  • Depression : Psychological research has found effective therapies for treating depression, such as cognitive behavioural therapy and SSRIs. The benefits of such therapies – if they are effective – are likely to outweigh the costs because they enable the person to return to work and pay taxes, as well avoiding long-term costs to the health service.
  • OCD : Similar to above: Drug therapies (e.g. SSRIs) and behavioural approaches (e.g. CBT) may alleviate OCD symptoms, enabling OCD sufferers to return to work, pay taxes, and avoid reliance on healthcare services.
  • Memory : Public money is required to fund police investigations. Psychological tools, such as the cognitive interview , have improved the accuracy of eyewitness testimonies, which equates to more efficient use of police time and resources.

Features of science

Theory construction and hypothesis testing.

Science works by making empirical observations of the world, formulating hypotheses/theories that explain these observations, and repeatedly testing these hypotheses/theories via experimentation. Key features of the scientific approach include:

  • Objectivity and empirical method: Measurements should be based on observable evidence rather than the researcher’s opinion. E.g. a tape measure provides a more objective measurement of something compared to a researcher’s guess. Similarly, a set of scales is a more objective way of determining which of two objects is heavier than a researcher lifting each up and giving their opinion.
  • Replicability: Other researchers should be able to repeat a study and obtain similar results. E.g. Burger (2009) replicated Milgram’s experiments with similar results.
  • Falsifiability: A scientific hypothesis must be capable of being proven wrong by some possible observation. E.g. the hypothesis that “water boils at 100°C” could be falsified by an experiment where you heated water to 999°C and it didn’t boil. In contrast, “everything doubles in size every 10 seconds” could not be falsified by any experiment because whatever equipment you used to measure everything would also double in size.
  • Freud’s psychodynamic theories are often criticised for being unfalsifiable: There’s not really any observation that could disprove them because every possible behaviour (e.g. crying or not crying) could be explained as the result of some unconscious thought process.

Paradigm shifts

Philosopher Thomas Kuhn argues that science is not as unbiased and objective as it seems. Instead, the majority of scientists just accept the existing scientific theories (i.e. the existing paradigm) as true and then find data that supports these theories while ignoring/rejecting data that refutes them.

Rarely, though, minority voices are able to successfully challenge the existing paradigm and replace it with a new one. When this happens it is a paradigm shift . An example of a paradigm shift in science is that from Newtonian gravity to Einstein’s theory of general relativity.

Data handling and analysis

Types of data, quantitative vs. qualitative.

Data from studies can be quantitative or qualitative :

  • Quantitative: Numerical data, such as counts, scores, and ratings
  • Qualitative: Non-numerical data, such as written or spoken descriptions

For example, some quantitative data in the Milgram experiment would be how many subjects delivered a lethal shock. In contrast, some qualitative data would be asking the subjects afterwards how they felt about delivering the lethal shock.

Strengths of quantitative data / weaknesses of qualitative data:

  • Can be compared mathematically and scientifically: Quantitative data enables researchers to mathematically and objectively analyse data. For example, mood ratings of 7 and 6 can be compared objectively, whereas qualitative assessments such as ‘sad’ and ‘unhappy’ are hard to compare scientifically.

Weaknesses of quantitative data / strengths of qualitative data:

  • Less detailed: In reducing data to numbers and narrow definitions, quantitative data may miss important details and context.

Content analysis

Although the detail of qualitative data may be valuable, this level of detail can also make it hard to objectively or mathematically analyse. Content analysis is a way of converting qualitative data into quantitative data that can be analysed. The process is as follows (see the sketch after this list):

  • Collect the qualitative data. E.g. a set of unstructured interviews on the topic of childhood
  • Identify coding categories relevant to the research. E.g. discussion of traumatic events, happy memories, births, and deaths
  • Count how often each category appears in the data. E.g. researchers listen to the unstructured interviews and count how often traumatic events are mentioned
  • Statistical analysis is carried out on this quantitative data
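
A minimal sketch of the counting step, using two invented interview snippets and made-up keyword lists for each coding category:

```python
# Hypothetical interview extracts (the categories and keywords are invented for illustration)
transcripts = [
    "I remember a traumatic accident, but also many happy days at the beach.",
    "The birth of my sister was a happy event; my grandfather's death was traumatic.",
]

categories = {
    "traumatic events": ["traumatic", "accident", "death"],
    "happy memories": ["happy", "beach"],
}

# Count how often each category is mentioned across all the transcripts
counts = {name: 0 for name in categories}
for text in transcripts:
    lowered = text.lower()
    for name, keywords in categories.items():
        counts[name] += sum(lowered.count(word) for word in keywords)

print(counts)  # quantitative data that can now be analysed statistically
```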

Primary vs. secondary

Researchers can produce primary data or use secondary data to achieve the research aims of their study:

  • Primary data: Original data collected for the study
  • Secondary data: Data from another study previously conducted

Meta-analysis

A meta-analysis is a study of studies. It involves taking several smaller studies within a certain research area and using statistics to identify similarities and trends within those studies to create a larger study.

We have looked at some examples of meta-analyses elsewhere in the course such as Van Ijzendoorn’s meta-analysis of several strange situation studies and Grootheest et al’s meta-analysis of twin studies on OCD .

A good meta-analysis is often more reliable than a regular study because it is based on a larger data set, and any issues with one single study will be balanced out by the other studies.

Descriptive statistics

Measures of central tendency: mean, median, mode.

Mean , median , and mode are measures of central tendency . In other words, they are ways of reducing large data sets into averages .

The mean is calculated by adding all the numbers in a set together and dividing the total by the number of numbers.

  • Example set: 22, 78, 3, 33, 90
  • 22+78+3+33+90=226
  • 226/5=45.2
  • The mean is 45.2

Strengths:

  • Uses all data in the set.
  • Accurate: Provides a precise number based on all the data in a set.

Weaknesses:

  • Can be skewed by freak scores (outliers). E.g.: 1, 3, 2, 5, 9, 4, 913 <- the mean is 133.9, but the 913 could be a measurement error, so the mean is not representative of the data set

The median is calculated by arranging all the numbers in a set from smallest to biggest and then finding the number in the middle. Note: If the total number of numbers is odd, you just pick the middle one. But if the total number of numbers is even, you take the mid-point between the two numbers in the middle.

  • Example set: 20, 66, 85, 45, 18, 13, 90, 28, 9
  • Arranged in order: 9, 13, 18, 20, 28, 45, 66, 85, 90
  • The median is 28

Strengths:

  • Won’t be skewed by freak scores (unlike the mean).

Weaknesses:

  • May not be representative of the set as a whole. E.g.: 1, 1, 3, 9865, 67914 <- the median of 3 is not really representative of the larger numbers in the set.
  • Less accurate/sensitive than the mean.

The mode is calculated by counting which is the most commonly occurring number in a set.

  • Example set: 7, 7, 20, 16, 1, 20, 25, 16, 20, 9
  • There are two 7’s and two 16’s, but three 20’s
  • The mode is 20

Strengths:

  • Makes more sense for presenting the central tendency in data sets with whole numbers. For example, the average number of limbs for a human being will have a mean of something like 3.99, but a mode of 4.

Weaknesses:

  • Does not use all the data in a set.
  • A data set may have more than one mode.
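
All three measures can be checked with Python’s built-in statistics module; this sketch simply re-uses the example sets from above:

```python
from statistics import mean, median, mode

print(mean([22, 78, 3, 33, 90]))                    # 45.2 (example set from the mean section)
print(median([20, 66, 85, 45, 18, 13, 90, 28, 9]))  # 28 (example set from the median section)
print(mode([7, 7, 20, 16, 1, 20, 25, 16, 20, 9]))   # 20 (example set from the mode section)
```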

Measures of dispersion: Range and standard deviation

Range and standard deviation are measures of dispersion . In other words, they quantify how much scores in a data set vary .

The range is calculated by subtracting the smallest number in the data set from the largest number.

  • Example set: 59, 8, 7, 84, 9, 49, 14, 75, 88, 11
  • The largest number is 88
  • The smallest number is 7
  • The range is 81

Strengths:

  • Easy and quick to calculate: You just subtract one number from another
  • Takes account of the most extreme scores (the highest and lowest)

Weaknesses:

  • Can be skewed by freak scores: The difference between the biggest and smallest numbers can be skewed by a single anomalous result or error, which may give an exaggerated impression of the data distribution compared to standard deviation .
  • Doesn’t show how the data is spread between the two extremes. For example, these two data sets both have a range of 15 but very different distributions:
      ◦ 4, 4, 5, 5, 5, 6, 6, 7, 19
      ◦ 4, 16, 16, 17, 17, 17, 18, 19, 19

Standard deviation

The standard deviation (σ) is a measure of how much numbers in a data set deviate from the mean (average). It is calculated as follows:

  • Example data set: 59, 79, 43, 42, 81, 100, 38, 54, 92, 62
  • Calculate the mean: 65
  • Subtract the mean from each number in the set: -6, 14, -22, -23, 16, 35, -27, -11, 27, -3
  • Square each of these differences: 36, 196, 484, 529, 256, 1225, 729, 121, 729, 9
  • Add the squared differences together: 36+196+484+529+256+1225+729+121+729+9=4314
  • Divide this total by the number of values in the set: 4314/10=431.4 (this is the variance)
  • Take the square root: √431.4=20.77
  • The standard deviation is 20.77

Note: This method calculates the standard deviation of an entire population. There is a slightly different method for calculating the standard deviation of a sample: instead of dividing by the number of values, you divide by the number of values minus 1 (in this case 4314/9=479.333, giving a standard deviation of 21.89).

Strengths:

  • Less skewed by freak scores: Standard deviation measures the average difference from the mean and so is less likely to be skewed by a single freak score (compared to the range ).

Weaknesses:

  • Takes longer to calculate than the range .
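
The worked example above (including the note about population vs. sample standard deviation) can be checked with Python’s statistics module:

```python
from statistics import pstdev, stdev

data = [59, 79, 43, 42, 81, 100, 38, 54, 92, 62]

# Population standard deviation (divide by n) - matches the worked example above
print(round(pstdev(data), 2))   # 20.77

# Sample standard deviation (divide by n - 1) - the alternative mentioned in the note
print(round(stdev(data), 2))    # 21.89
```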

Percentages

A percentage (%) describes how much out of 100 something occurs. It is calculated as follows:

  • Example: 63 out of a total of 82 participants passed the test
  • 63/82=0.768
  • 0.768*100=76.8
  • 76.8% of participants passed the test

Percentage change

To calculate a percentage change, work out the difference between the original number and the after number, divide that difference by the original number, then multiply the result by 100:

  • Example: He got 80 marks on the test but after studying he got 88 marks on the test
  • 88-80=8
  • 8/80=0.1
  • 0.1*100=10
  • His test score increased by 10% after studying
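
Both calculations are one line each in Python, using the numbers from the two examples above:

```python
# Percentage: 63 out of 82 participants passed the test
print(round(63 / 82 * 100, 1))          # 76.8

# Percentage change: a test score that rises from 80 marks to 88 marks
before, after = 80, 88
print((after - before) / before * 100)  # 10.0
```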

Normal and skewed distributions

Normal distribution.

A data set that has a normal distribution will have the majority of scores on or near the mean average. A normal distribution is also symmetrical: There are an equal number of scores above the mean as below it. In a normal distribution, scores become rarer and rarer the more they deviate from the mean.

An example of a normal distribution is IQ scores. As you can see from the histogram below, there are as many IQ scores below the mean as there are above the mean :

statistical infrequency bell curve

When plotted on a histogram , data that follows a normal distribution will form a bell-shaped curve like the one above.

Skewed distribution

positive skew and negative skew histograms

Skewed distributions are often caused by outliers: Freak scores that throw off the mean . Skewed distributions can be positive or negative :

  • Positive skew: The tail of the distribution stretches to the right, towards the higher scores. Mean > Median > Mode
  • Negative skew: The tail of the distribution stretches to the left, towards the lower scores. Mean < Median < Mode

Correlation

Correlation refers to how closely two (or more) variables are related. For example, hot weather and ice cream sales may be positively correlated: When the temperature goes up, so do ice cream sales.

Correlations are measured mathematically using correlation coefficients (r). A correlation coefficient will be anywhere between +1 and -1:

  • r=+1 means two things are perfectly positively correlated: When one goes up, so does the other by the same amount
  • r=-1 means two things are perfectly negatively correlated: When one goes up, the other goes down by the same amount
  • r=0 means two things are not correlated at all: A change in one is totally independent of a change in the other

The following scattergrams illustrate various correlation coefficients:

correlation coefficient scatter graph examples
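
A minimal sketch of calculating r for two invented co-variables, hours studied and test score (requires Python 3.10+ for statistics.correlation):

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical co-variables for 8 students
hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]
test_score = [35, 40, 52, 49, 61, 68, 70, 81]

r = correlation(hours_studied, test_score)
print(round(r, 2))   # close to +1, i.e. a strong positive correlation
```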

Presentation of data

Tables

Tables can present either the raw data collected or descriptive statistics that summarise it.

table example

For example, the behavioural categories table above presents the raw data for each infant in this made-up study. But in the results section, researchers might include another table that compares average anxiety rating scores for males and females.

Scattergrams

scattergram example

For example, each dot on the correlation scattergram opposite could represent a student. The x-axis could represent the number of hours the student studied, and the y-axis could represent the student’s test score.

Bar charts

A bar chart is used to present discrete data – data that falls into separate categories.

eyewitness testimony loftus and palmer

For example, the results of Loftus and Palmer’s study into the effects of different leading questions on memory could be presented using the bar chart above. It’s not like there are categories in-between ‘contacted’ and ‘hit’, so the bars have gaps between them (unlike a histogram ).

Histograms

A histogram is a bit like a bar chart but is used to illustrate continuous or interval data (rather than discrete data or whole numbers).

histogram example

Because the data on the x axis is continuous, there are no gaps between the bars.

Line graphs

A line graph uses points connected by lines to show how values change, often over time.

line graph example

For example, the line graph above illustrates 3 different people’s progression in a strength training program over time.

Pie charts

A pie chart shows how a total is divided up, with each slice representing a proportion or frequency.

pie chart example

For example, the frequency with which different attachment styles occurred in Ainsworth’s strange situation could be represented by the pie chart opposite.

Inferential testing

Probability and significance.

The point of inferential testing is to see whether a study’s results are statistically significant , i.e. whether any observed effects are a result of whatever is being studied rather than just random chance.

For example, let’s say you are studying whether flipping a coin outdoors increases the likelihood of getting heads. You flip the coin 100 times and get 52 heads and 48 tails. Assuming a baseline expectation of 50:50, you might take these results to mean that flipping the coin outdoors does increase the likelihood of getting heads. However, from 100 coin flips, a ratio of 52:48 between heads and tails is not very significant and could have occurred due to luck. So, the probability that this difference in heads and tails is because you flipped the coin outside (rather than just luck) is low.

Probability is denoted by the symbol p . The lower the p value, the more statistically significant the results are. You can never get a p value of 0, though, so researchers set a threshold at which the results are considered statistically significant enough to reject the null hypothesis . In psychology, this threshold is usually p<0.05, which means there is a less than 5% probability that a difference this large would have occurred by chance alone if the null hypothesis were true.
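
For the coin-flip example, the probability of a result at least as lopsided as 52:48 can be worked out exactly from the binomial distribution. This sketch assumes a fair coin as the null hypothesis:

```python
from math import comb

# Probability of getting 52 or more heads from 100 flips of a fair coin
n, heads = 100, 52
p_one_tail = sum(comb(n, k) for k in range(heads, n + 1)) / 2 ** n

# Two-tailed: a 52:48 split in either direction counts as "at least this extreme"
p_two_tail = min(1.0, 2 * p_one_tail)

print(round(p_two_tail, 2))   # well above 0.05, so not statistically significant
```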

Type 1 and type 2 errors

When interpreting statistical significance, there are two types of errors:

  • Type 1 error (false positive): Rejecting the null hypothesis when it is actually true. E.g. the p threshold is <0.05, but the researchers’ results are among the 5% of fluke outcomes that look significant but are just due to luck
  • Type 2 error (false negative): Accepting the null hypothesis when the experimental hypothesis is actually true. E.g. the p threshold is set very stringently (e.g. <0.01), and the data falls just short (e.g. p=0.02)

Increasing the sample size reduces the likelihood of making these errors, particularly type 2 errors.


Types of statistical test

Note: The inferential tests below are needed for A level only. If you are taking the AS exam , you only need to know the sign test .

There are several different types of inferential test in addition to the sign test . Which inferential test is best for a study will depend on the following three criteria:

  • Whether you are looking for a difference or a correlation
  • The level of measurement of the data:
      ◦ Nominal data: Data in discrete categories. E.g. at the competition there were 8 runners, 12 swimmers, and 6 long jumpers (it’s not like there are in-between measurements between ‘swimmer’ and ‘runner’)
      ◦ Ordinal data: Data that can be ordered or ranked, but without fixed intervals between values. E.g. first, second, and third place in a race, or ranking your mood on a scale of 1-10
      ◦ Interval data: Data measured in fixed units with equal intervals between values. E.g. weights in kg, heights in cm, or times in seconds
  • Whether the experimental design is related (i.e. repeated measures) or unrelated (i.e. independent groups)

The following table shows which inferential test is appropriate according to these criteria:

|                                       | Nominal data | Ordinal data   | Interval data    |
|---------------------------------------|--------------|----------------|------------------|
| Test of difference (unrelated design) | Chi-Squared  | Mann-Whitney   | Unrelated t-test |
| Test of difference (related design)   | Sign test    | Wilcoxon       | Related t-test   |
| Test of correlation/association       | Chi-Squared  | Spearman’s rho | Pearson’s r      |

Note: You won’t have to work out all these tests from scratch, but you may need to:

  • Say which of the statistical tests is appropriate (i.e. based on whether it’s a difference or correlation; whether the data is nominal, ordinal, or interval; and whether the data is related or unrelated).
  • Identify the critical value from a critical values table and use this to say whether a result (which will be given to you in the exam) is statistically significant.
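
The table above can also be written as a simple lookup. The dictionary structure below is just an illustrative way of encoding it, not part of the specification:

```python
# The decision table above, encoded as (what you are testing, level of measurement) -> test
TESTS = {
    ("difference, unrelated design", "nominal"):  "Chi-Squared",
    ("difference, unrelated design", "ordinal"):  "Mann-Whitney",
    ("difference, unrelated design", "interval"): "Unrelated t-test",
    ("difference, related design",   "nominal"):  "Sign test",
    ("difference, related design",   "ordinal"):  "Wilcoxon",
    ("difference, related design",   "interval"): "Related t-test",
    ("correlation",                  "nominal"):  "Chi-Squared",
    ("correlation",                  "ordinal"):  "Spearman's rho",
    ("correlation",                  "interval"): "Pearson's r",
}

# e.g. a repeated measures (related) experiment producing nominal data,
# where we are looking for a difference between the two conditions:
print(TESTS[("difference, related design", "nominal")])   # Sign test
```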

The sign test

The sign test is a way to calculate the statistical significance of differences between related pairs (e.g. before and after in a repeated measures experiment ) of nominal data. If the observed value (s) is equal to or less than the critical value (cv), the results are statistically significant.

Example: Let’s say we ran an experiment on 10 participants to see whether they prefer movie A or movie B . Each participant rated both movies:

| Participant | Movie A rating | Movie B rating |
|-------------|----------------|----------------|
| A           | 3              | 6              |
| B           | 3              | 3              |
| C           | 5              | 6              |
| D           | 4              | 6              |
| E           | 2              | 3              |
| F           | 3              | 7              |
| G           | 5              | 3              |
| H           | 7              | 8              |
| I           | 2              | 6              |
| J           | 8              | 5              |
  • Work out n: n = 9 (because even though there are 10 participants, one participant had no change so we exclude them from our calculation)
  • Work out whether the hypothesis is directional (one-tailed) or non-directional (two-tailed): In this case our experimental hypothesis is two-tailed: Participants may prefer movie A or movie B (the null hypothesis is that participants like both movies equally)
  • Choose the level of statistical significance: In this case, let’s say it’s 0.1
  • Look up the critical value (cv) in a critical values table using n, the significance level, and whether the hypothesis is one- or two-tailed: So, in this example, our critical value (cv) is 1
  • Work out the observed value (s) by counting the less frequent sign: In this example, 2 participants rated movie A higher and 7 rated movie B higher, so our observed value (s) is 2
  • Compare s to cv: In this example, the observed value (2) is greater than the critical value (1) and so the results are not statistically significant. This means we must accept the null hypothesis and reject the experimental hypothesis .
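
The whole calculation can be checked with a short script; the critical value of 1 is taken from the worked example above rather than computed:

```python
# Movie ratings from the table above (participants A-J)
movie_a = [3, 3, 5, 4, 2, 3, 5, 7, 2, 8]
movie_b = [6, 3, 6, 6, 3, 7, 3, 8, 6, 5]

signs = []
for a, b in zip(movie_a, movie_b):
    if a != b:                      # participants with no difference are excluded
        signs.append("+" if b > a else "-")

n = len(signs)                                  # 9
s = min(signs.count("+"), signs.count("-"))     # observed value: the less frequent sign

critical_value = 1                              # from a critical values table (n=9, two-tailed, 0.1)
print(n, s, s <= critical_value)                # 9 2 False -> not statistically significant
```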


PSYCHOLOGY AQA  A-LEVEL UNIT 2 (7182)

The syllabus.

METHODS, TECHNIQUES & DESIGN

  • Primary and secondary data, and meta-analysis. Quantitative and qualitative data
  • Aims, operationalising variables, IVs and DVs
  • Hypotheses - directional and non-directional
  • Experimental design - independent groups, repeated measures, matched pairs
  • Validity – internal and external; extraneous and confounding variables; types of validity and improving validity
  • Control – random allocation, randomisation, standardisation
  • Demand characteristics and investigator effects
  • Reliability; types of reliability and improving reliability
  • Pilot studies
  • Correlation analysis – covariables and hypotheses, positive/negative correlations
  • Observational techniques – use of behavioural categories
  • Self-report techniques – design of questionnaires and interviews
  • Case studies
  • Content analysis
  • Thematic Analysis

PARTICIPANTS; ETHICS; FEATURES OF SCIENCE & SCIENTIFIC METHOD; THE ECONOMY

  • Selecting participants and sampling techniques
  • The British Psychological Society (BPS) code of ethics and ways of dealing with ethical issues
  • Forms and instructions
  • Peer review
  • Features of science: objectivity, empirical method, replicability and falsifiability
  • Paradigms and paradigm shifts
  • Reporting psychological investigations
  • The implications of psychological research for the economy

DESCRIPTIVE STATISTICS

  • Analysis and interpretation of quantitative data. Measures of central tendency - median, mean, mode. Calculating %’s. Measures of dispersion – range and standard deviation (SD)
  • Presentation and interpretation of quantitative data – graphs, histograms, bar charts, scattergrams and tables
  • Analysis and interpretation of correlational data; positive and negative correlations and the interpretation of correlation coefficients
  • Distributions: normal and skewed

INFERENTIAL STATISTICS

  • Factors affecting choice of statistics test: Spearman’s rho, Pearson’s r, Wilcoxon, Mann-Whitney, related t-test, unrelated t-test, Chi-Squared test
  • Levels of measurement – nominal, ordinal, interval
  • Procedures for statistics tests
  • Probability and significance: use of statistical tables and critical values in interpretation of significance; Type I and Type II errors
  • Introduction to statistical testing: the sign test

INTRODUCTION

Research Methods is concerned with how psychologists conduct research in an attempt to find evidence for theories . A theory without research support is really just someone’s reasoned opinion, not a proven fact .

Psychologists generally adopt a scientific approach to studying the mind and behaviour. The scientific method is based on empiricism – the belief that one can gain true knowledge of the world through the unbiased observation and measurement of observable, physical phenomena .

Laboratory experimentation is the method most associated with science as it involves the careful manipulation of variables to establish whether there are cause-effect relationships with other variables: for example, will an increase in testosterone cause an increase in aggression?

Psychologists face difficulties, however, in that they are studying highly complex, reactive creatures (humans) who tend not to behave in the predictable way that the objects of study of physics, chemistry and biology do. Equally, people put in an artificial laboratory situation who are aware they are being observed will tend not to behave in a normal, natural way. For this (and various other) reasons, psychologists have developed a variety of other means of research such as field and natural experiments , correlation studies and observations .

A debate exists within Psychology as to what extent it is desirable and/or appropriate to apply scientific methods to the study of humans. Many psychologists have argued that a strictly scientific approach reduces the complexity of human behaviour to an overly reductionist level and that human psychology can be better understood by more detailed and in depth methods such as questionnaires , interviews , case studies and content analysis .

Whereas biological approaches , behaviourism and cognitive psychology tend to favour quantitative , scientific , laboratory based approaches, psychodynamic and humanistic approaches argue for a more qualitative , descriptive approach.

The syllabus focuses on scientific approaches and how to design studies which produce valid (accurate/truthful) findings. There is also an emphasis on the statistical analysis of quantitative data .


METHODS, TECHNIQUES & DESIGN ( A-level Psychology revision notes)

PRIMARY AND SECONDARY DATA, AND META-ANALYSIS. QUANTITATIVE AND QUALITATIVE DATA

Psychologists conduct research in an attempt to find evidence for theories . Throughout the history of Psychology there has been an on-going debate in regard to what methods of investigation are appropriate to study the mind and behaviour. Whilst some favour a highly scientific, lab-based, experimental approach , others argue that these methods are inappropriate to the study of humans and support more in depth, less scientific, qualitative approaches such as interviews, case studies and observations.

  • Laboratory Experiments
  • Field Experiments
  • Natural & Quasi Experiments
  • Correlation Studies
  • Observational techniques
  • Self-report Questionnaires
  • Self-report Interviews
  • Case Studies
  • Content Analysis

Each of these methodologies uses different research techniques and has associated strengths and limitations .

Data (information produced from a research study) may be

  • Quantitative : numerical data that can be statistically analysed. This has the advantage of being more objective, quicker to gather and analyse, and can be presented in ways that are easily and quickly understandable. However, the data can be superficial, lacking the depth and detail of participants' subjective experiences.
  • Qualitative: written , richly detailed, descriptive accounts of what is being studied. This allows participants to express themselves freely. However, these methods are time consuming , can be costly to collect, difficult to analyse and suffer from problems of subjectivity .

Data gathered by psychologists can be

  • Primary – directly collected by the psychologist themselves: e.g. questionnaires, interviews, observations, experiments.
  • Secondary – data collected by others: e.g. official statistics, the work of other psychologists, media products such as film or documentary.
  • Meta-analysis refers to when a psychologist draws together the findings and conclusions of many research studies into 1 single overall conclusion.

LABORATORY EXPERIMENTS ( AQA A-level Psychology revision notes)

Lab experiments are the most complex methodology in terms of their logic and design.

Any experiment begins with an aim .

The aim is a loose, general statement of what we intend to investigate: e.g. does alcohol affect driving performance?

Any experiment looks at the cause-effect relationship between 2 variables . A variable is any factor/thing that can be measured and changes. For example, intelligence, aggression, score on authoritarian personality scale, short-term memory capacity, etc. The two variables in the above example are alcohol and driving performance.

OPERATIONALISING VARIABLES

In psychological research we often want to find a way of expressing a variable numerically. This is referred to as operationalising a variable . Variables can be operationalised in many ways – for example,

  • Intelligence can be operationalised through an IQ test
  • Authoritarianism can be operationalised through a questionnaire
  • STM capacity can be operationalised through a task such as seeing how many digits a participant can remember at once.

INDEPENDENT & DEPENDENT VARIABLES

Of the 2 variables we are testing in an experiment, one is referred to as the Independent Variable (IV) and the other is referred to as the Dependent Variable (DV) .

In an experiment we test 2 conditions of the IV against the DV to see if there is a significant difference between how the 2 conditions of the IV affect the DV .

For example, we could set up an experiment to examine the cause-effect relationship between alcohol and driving performance . To do this we could recruit 100 volunteer participants , randomly split them into 2 groups of 50 , give the 1st group a measure of alcohol and then let them drive on a driving simulator which would produce a score of x/20 for driving performance. The 2nd group would be given no alcohol and allowed to drive on the simulator. Therefore, we would end up with 50 scores of x/20 for those who had driven after consuming alcohol, and 50 scores of x/20 for those who had driven and not consumed alcohol.

We could take the mean average score for each group and compare them. For example, we may find that those who had drunk alcohol scored a mean average of 10/20 whereas those who hadn’t consumed alcohol scored an average of 16/20. What we have done in this experiment is to test 2 conditions of the IV (alcohol and no alcohol) against the DV (driving performance) to see if there is a significant difference between how the 2 conditions of the IV affect the DV . If we find a significant difference between how the 2 conditions of the IV affect the DV we have found evidence that there is a cause-effect relationship between alcohol consumption and poor driving performance .  

AQA A LEVEL PSYCHOLOGY IV + DV

From the aim of our experiment we formulate our hypotheses.

A hypothesis is an exact, precise, testable prediction of what we expect to find in an experiment.

  • The Experimental/Alternative Hypothesis : a statement predicting that we will find a difference between how the 2 conditions of the IV affect the DV: e.g. 'There will be a significant difference in driving performance between participants who have and have not consumed alcohol' .
  • The Null Hypothesis : a statement predicting that we will find no significant difference between how the 2 conditions of the IV affect the DV: e.g. 'There will be no significant difference in driving performance between participants who have and have not consumed alcohol' .

The above hypotheses are non-directional (or 2-tailed) hypotheses. This means that they do not make a prediction about the direction of results : i.e. they don’t predict that 1 of the groups is going to do better or worse than the other, they just predict that some kind of difference will occur.

However, if the experimenter strongly expects that results will go in a certain direction, or previous research indicates this, they may choose to apply a directional (or 1-tailed) hypothesis. This does make a prediction about the direction of results.

  • Experimental Hypothesis (1-tailed): ‘Participants who have consumed alcohol will show significantly poorer driving performance than participants who have not consumed alcohol’.

EXPERIMENTAL DESIGN

In any experiment we always have at least 2 groups of participants performing in at least 2 experimental conditions . There are several different ways in which we can allocate (put) participants to different conditions each with associated strengths and limitations .

1. Independent Groups Design . Participants are split into 2 groups , each group performing in 1 condition only .

The limitations of this design are

  • Participant Variables – the fact that individual differences between participants may affect the DV without us being aware of it and thus reduce the validity (accuracy) of our results. For example, we may find that participants in the alcohol condition are all excellent drivers with high alcohol tolerance, whilst participants in the no-alcohol condition are all poor drivers. Thus, the alcohol group may drive better and we might (falsely) conclude that alcohol improves driving performance. The problem of participant variables is reduced with a large sample and by randomly allocating participants to the 2 conditions.
  • It requires more participants than a repeated measures design .

The advantage of this design is that we will not encounter Order Effects (see below).

2. Repeated Measures Design . In this design all participants perform in the 1st condition and then perform in the 2nd condition . This allows us to directly compare participants' performance across the 2 conditions.

The limitations of this design are

  • Order Effects – when participants perform in condition 1 then condition 2, their performance in the 2nd condition may either improve due to practice or get worse due to boredom or tiredness . In an attempt to overcome the problem of order effects we can use counterbalancing . This involves ½ the participants performing in condition 1 first, then condition 2, while the other ½ of the participants perform in condition 2 first, then condition 1. (This is thought to balance out the problem of order effects).
  • They may also work out the aim of the study and exhibit demand characteristics (see below).

The advantage of this design is that there is no possibility of participant variables threatening the validity of the study.

3. Matched Pairs Design : This design overcomes the problem of order effects and participant variables . Before the study begins we need to find participants who we can match with each other in terms of relevant characteristics such as age, gender, IQ, etc. The study then runs as an independent groups design , however, because each participant is matched with another participant in the other condition participant variables are less of a problem. The disadvantage of this design is it may be costly, time-consuming and difficult to find participants who match precisely .

It is highly important that experiments are well designed and run - otherwise findings may be inaccurate and lead us to draw false conclusions.

Validity generally refers to the truthfulness and accuracy of our findings.

We can distinguish between 2 types of validity.

  • INTERNAL/EXPERIMENTAL VALIDITY . This relates to whether we are really measuring what we think we are measuring. In any experiment we are trying to isolate the effect of the IV on the DV . Therefore, we need to ensure that no other unwanted, uncontrolled extraneous variables are affecting the DV without our knowledge. If an extraneous variable does affect our final results, we refer to it as a confounding (i.e. confusing) variable.

AQA A LEVEL PSYCHOLOGY EXTRANEOUS VARIABLES

  • EXTERNAL VALIDITY . This relates to whether findings can be generalised beyond the particular setting, participants and time period of the study. It includes:
  • Ecological Validity . This relates to the problem of whether studies conducted under highly controlled, artificial, lab situations can produce findings that can be generalised to everyday life, or whether behaviour shown by participants will be artificial . For example, in the drink-driving study, participants use a driving simulator which is not really similar to driving in a real car on a real road.
  • Population Validity . If we only use small or biased/unrepresentative samples of participants, we may not be able to generalise findings to human behaviour in general.
  • Temporal Validity . If studies were conducted a long time ago, it can be argued that their findings are not relevant to the present day. For example, Asch’s conformity study was conducted in 1950’s America and it has been argued that the climate of America at this time was particularly conformist. Social change since the 50’s has meant that people are now far more non-conformist and independent.

CONTROL OF EXTRANEOUS VARIABLES; RANDOM ALLOCATION, STANDARDISATION

Extraneous variables are variables which the experimenter has failed to eliminate or control which are affecting the DV without us being aware of it. This threatens the validity of the study and the accuracy of our findings.

Extraneous variables must be carefully and systematically controlled . When designing an experiment, researchers should consider the following areas where extraneous variables may arise:

  • Random allocation/randomisation of participants to experimental conditions. To avoid any bias on the part of the researcher, participants should always be divided into groups randomly.
  • Standardisation of instructions and procedures. Participants should be given exactly the same instructions as each other and go through exactly the same procedures as each other to avoid differences in these acting as extraneous variables.
  • Participant variables : participants’ age, intelligence, personality and so on should be controlled across the different groups taking part. For example, in the above experiment: gender, driving experience, alcohol tolerance, body mass, etc. Participants could also be pre-tested and put into a matched-pairs design.
  • Situational variables : the experimental setting and surrounding environment must be controlled. This may include the time of day, the temperature or noise effects.
  • Order effects : participants may improve or get bored performing in different conditions. This can be controlled by using independent groups, matched participants or counter-balancing.
  • Demand Characteristics or Investigator Effects (see below).
  • A control group is a group of participants who act as a baseline against which differences in the experimental group are measured. For example, we might compare improvements in mood scores for an experimental group who received therapy against a control group who received none.

TYPES OF VALIDITY AND IMPROVING VALIDITY

It is highly important that experiments are well designed and run - otherwise findings may be inaccurate and lead us to draw false conclusions. If studies are to be regarded as credible, they must be valid .

The following techniques are used to check for/achieve/ensure validity .

  • Face validity is the extent to which a test is subjectively viewed as being able to measure the concept it claims to measure. In other words, a test can be said to have face validity if it "looks like" it is going to measure what it is supposed to measure.
  • Content Validity involves independent experts being asked to assess the validity/accuracy/appropriateness of instruments/tests used to measure a variable: e.g. agreeing that a particular IQ test is a valid measure of intelligence.
  • Concurrent Validity involves comparing the validity of a new test/measure against an established test/measure whose validity is already known and trusted. For example, the results of a new form of IQ test could be tested against an old, established IQ test. If scores correlate between the 2 tests they are said to have concurrent validity.

THE RELATIONSHIP BETWEEN RESEARCHER AND PARTICIPANTS

The fact that an experiment is a social situation means that behaviour may be affected by the presence of others (experimenter and other participants) and the expectations that participants have. Thus, we may not be getting a valid picture of how people behave in the real world.

  • Demand Characteristics refers to the fact that participants realise they are in an experiment and are being observed and tested. They may, therefore, alter their behaviour either to behave in ways they think the experimenter wants them to behave in or according to how they think they should behave. Participants may try to work out the aim of experiment and modify their behaviour accordingly. They may also show ‘social desirability bias’ – giving responses they believe are correct or moral, rather than answering honestly.
  • Investigator Effects refers to the fact that the experimenter may consciously or unconsciously give hints or clues to research participants about how they want or expect them to behave.

RELIABILITY

Reliability of a study refers to whether, if we conducted the study again, it would produce similar results . Clearly, if a study produces wildly varying results each time it is carried out, either there is no real cause-effect relationship between the IV and the DV or the design of the study is invalid . Therefore, repeating a study is a way of checking (confirming or disconfirming) previous findings.

TYPES OF RELIABILITY

Inter-rater reliability

  • If a number of different observers are conducting the same observational study, we need to ensure the observers have inter-rater reliability . This means that observers are all defining behaviours and recording observations in the same way as each other . Thus, before the study begins observers should be trained through the use of, for example, a training video where they learn and are then tested on how to define and categorise behaviours in the same way as each other. We can assess inter-rater reliability by analysing the correlation between different observers' scores for the same behaviour. This will produce a correlation coefficient (see Correlation Studies and Spearman's rho test): e.g. +0.96 = a strong positive correlation (they are rating things in the same way as each other).

Test-retest reliability

  • Reliability of a test (e.g. IQ test) or questionnaire can be tested by asking a participant to complete the test/questionnaire, then complete it again some time later (e.g. 2 weeks or a month later). If answers are similar over a period of time, then the test/questionnaire can be said to have reliability. We can assess test-retest reliability by analysing the correlation between the different test scores. This will produce a correlation coefficient (see Correlation Studies): e.g. +0.96 = a strong positive correlation (high similarity between different test scores).

PILOT STUDIES

A pilot study is a small scale version of the main study that is conducted in advance to ensure

  • The procedures of the study will run smoothly
  • That equipment/tests are functioning accurately
  • That participants understand instructions
  • That all extraneous variables are controlled

  STRENGTHS OF LABORATORY EXPERIMENTS

  • High degree of control : experimenters can control all variables in the experiment. The IV and DV can be precisely defined (operationalised) and measured to assess cause-effect relationships - for example, the amount of caffeine given (IV) and reaction time (DV). This leads to greater accuracy and objectivity.
  • Replication : other researchers can easily repeat/replicate the experiment and check results for reliability . This is much easier in a controlled laboratory situation as opposed to a field experiment conducted in the real world.

LIMITATIONS OF LABORATORY EXPERIMENTS

  • Lack of ecological validity.
  • Demand characteristics.

(Explain both these points in full according to above notes.)

FIELD EXPERIMENTS ( Psychology A-level revision)

A field experiment is carried out in the real world rather than under artificial laboratory conditions. Participants are exposed to a 'set-up' social situation to see how they respond. The 'naïve' participants are unaware they are taking part in an experiment.

STRENGTHS OF FIELD EXPERIMENTS

  • As the experiment is conducted in the real world levels of ecological validity are increased meaning that we can generalise behaviour to real-life behaviour.
  • As participants do not know they are involved in an experiment they will not show demand characteristics .

LIMITATIONS OF FIELD EXPERIMENTS

  • As the study is not conducted under tightly controlled laboratory conditions there is a greater chance that extraneous variables will influence the DV without the researcher being aware of this.
  • Field experiments often involve breaking ethical guidelines : e.g. failing to get participants' consent, deceiving participants, failing to inform them of their right to withdraw or to debrief them, etc.

NATURAL & QUASI EXPERIMENTS ( A-level Psychology revision)

In a natural experiment the psychologist does not manipulate or 'set up' a situation to which participants are exposed; rather, they observe a change in the natural world (the IV) and assess whether this has an effect on another variable (the DV) . For example, observing the introduction of TV into remote communities (IV = (i) no TV, and (ii) TV) and measuring whether this has had an effect on children's aggressiveness (DV). A quasi-experiment is the same as a normal experiment except that participants are not randomly allocated to conditions .

STRENGTHS OF NATURAL/QUASI EXPERIMENTS

  • As the experiment is conducted in the real world levels of ecological validity are increased.
  • In natural experiments, as participants do not know they are involved in an experiment they will not show demand characteristics.

LIMITATIONS OF NATURAL/QUASI EXPERIMENTS

  • Natural experiments may involve breaking ethical guidelines : e.g. failing to get participants' consent to be observed, failing to inform them of their right to withdraw, or failing to debrief them.

CORRELATION ANALYSIS ( AQA A-level Psychology revision)

 A correlation study involves measuring the relationship between 2 covariables : e.g. height and weight, stress and illness, ‘A’ Level point score and income aged 30, etc. (However, correlation studies only measure whether there is  some kind of relationship , not whether there is a cause-effect relationship .)

 The relationship may either be

  • Positive : as one co-variable increases, the other co-variable also increases.
  • Negative : as one co-variable increases, the other co-variable decreases.

AQA A LEVEL PSYCHOLOGY POSITIVE CORRELATION

To conduct a correlation study we need to operationalise the 2 co-variables and their relationship can then be plotted on a scattergram for each participant. The general pattern revealed should indicate whether the relationship is positive or negative and how weak or strong the relationship is. However, we can conduct statistical analysis of our data to produce a correlation coefficient : a number somewhere between -1 and +1 which will indicate the exact direction and strength of relationship between the 2 co-variables.

AQA A LEVEL PSYCHOLOGY CORRELATION COEFFICIENT

HYPOTHESES FOR CORRELATION STUDIES

Whereas hypotheses for experiments predict there will be a ‘difference’ between how the 2 conditions of the IV affect the DV, hypotheses for correlation studies predict there will be a ‘relationship’ between 2 co-variables.

Hypotheses can be directional or non-directional depending on whether or not past research indicates whether we should expect to find a relationship (either positive or negative).

  • 2-Tailed Experimental Hypothesis : ‘There will be a significant correlation between stress and illness’.
  • 1-Tailed Experimental Hypothesis : ‘There will be a significant positive correlation between stress and illness’. (This could also be predicting a negative correlation.)

  STRENGTHS OF CORRELATION STUDIES

  • Correlation studies allow us to assess the precise direction and strength of relationship between 2 co-variables using correlation coefficients (see above).
  • Correlation studies are a valuable preliminary (initial) research tool . They allow us to identify relationships between variables that we may then decide to investigate in more detail through experimentation.

LIMITATIONS OF CORRELATION STUDIES

  • Correlation studies only tell us that there is some kind of relationship between 2 variables, they do not tell us about cause-effect relationships , and thus they are a weaker methodology than lab experiments.
  • We may sometimes find a correlation between 2 variables by pure chance , even when no real relationship exists between the variables – thus they may be misleading. For example, there is an almost perfect negative correlation between Nigerian iron exports and the UK birth rate between 1870 and 1920 even though these factors are completely unrelated.

OBSERVATIONAL TECHNIQUES ( AQA A-level Psychology revision guide)

Observations simply involve observing behaviour in the natural environment .

Observations may be

  • Overt : the psychologist’s presence is made known to the group being studied. This may lead to demand characteristics and participants behaving in unnatural ways.
  • Covert : the psychologist's presence is hidden . Either the psychologist appears as a normal member of the public or their presence is concealed in some way (e.g. a CCTV camera). Although this overcomes the problem of demand characteristics , there are ethical issues to do with deception, lack of consent and invasion of privacy.
  • Participant : the psychologist joins the group being studied. This may be covert or overt.
  • Non-Participant : the psychologist remains outside the group being studied. This may be covert or overt.

Observational studies can be conducted in real life situations (naturalistic observations) or in laboratories (which provide more control – controlled observations ). Behaviours observed can be recorded in a qualitative form or can be counted/quantified .

For example, we may wish to conduct an observational study of gender differences in aggressive behaviours amongst 5-7-year olds. A tally chart can be constructed to record observations and behavioural classifications/categories .

AQA A LEVEL PSYCHOLOGY OBSERVATION TALLY CHART

This chart allows us to make statistical statements about behaviours: e.g. boys punch 4 times more than girls do.

One way of recording behavioural categories is event sampling (as in the example above – recording the number of times a particular event occurs); the other is time sampling – recording what is occurring at certain time intervals: e.g. every minute.

If a number of different observers are conducting the same observational study, we need to ensure the observers have inter-rater reliability (see section of Reliability above).

STRENGTHS OF OBSERVATIONAL STUDIES

  • During covert observations there are high levels of ecological validity and no demand characteristics . Participants are unaware that they are being observed and they are in a natural environment – thus we are observing behaviour as it naturally occurs.
  • With participant observation the psychologist can question participants and get a much more in depth insight into the behaviours, beliefs and motivations of the group being studied . Thus, a much deeper, richer, descriptive picture of behaviour is produced.

  LIMITATIONS OF OBSERVATIONAL STUDIES

  • With covert observations ethical issues arise concerning invasion of privacy, lack of consent, deception and lack of right to withdraw.
  • With overt observations participants may exhibit demand characteristics and act in socially-appropriate or otherwise unnatural ways.

SELF-REPORT METHODS: QUESTIONNAIRE SURVEYS & INTERVIEWS ( A-level Psychology resources)

 The term self-report simply means that the participant is reporting on their own perception/view of themselves – either using a questionnaire or an interview .

 For either technique:

  • Social desirability bias may be an issue in that if a participant knows their answers will be read/heard by someone else they may say what they think is socially acceptable/desirable rather than the truth. To combat this, questionnaires can be kept anonymous and confidential.
  • Self-report studies are also subjective in that the individual’s perception of themselves may be quite different from how others view them.

QUESTIONNAIRES

Questionnaires can be:

  • Closed ended .

E.g. I intend to vote for Joe Biden.

AQA A LEVEL PSYCHOLOGY CLOSED-ENDED QUESTIONNAIRE

Closed ended questions allow us to produce quantitative data: e.g. statistical statements such as 45% of participants agreed.

  • Open ended .

Produce lengthier answers – richly descriptive, qualitative data.

E.g. Explain why you intend to vote for Joe Biden.

__________________________________________________

When constructing questionnaires, we must try to ensure that the questions we ask are clear , concise , unambiguous , and easily understandable, and will be interpreted by all participants in the same way as each other.

We may also want to check the reliability of the questionnaire through test-retest reliability . Open-ended questionnaires can be thematically analysed (see later section on this).

STRENGTHS OF QUESTIONNAIRES

  • Closed-ended questionnaires are capable of providing large amounts of information from large amounts of people fairly cheaply and quickly .
  • Closed-ended questions can be statistically analysed to allow us to make statements about %’s of people who hold certain beliefs, etc.
  • Open-ended questions allow us to gain an in depth insight into participants’ personal opinions and the motives that underlie behaviours and beliefs .

  LIMITATIONS OF QUESTIONNAIRES

  • If socially sensitive questions are asked participants may give socially-appropriate responses. E.g. if a questionnaire asks whether someone holds racist beliefs it is unlikely they will admit to this to a researcher. This can be overcome by making questionnaires anonymous and confidential.
  • Open-ended questions can be difficult to interpret and analyse as participants may give lengthy answers. This makes it hard to understand broad patterns and trends in participants’ beliefs and behaviours.

INTERVIEWS

Interviews can be conducted with individuals or groups, either face-to-face or by telephone/internet. The respondent can describe their response in depth and detail (qualitative data) and say what they want to say rather than filling out pre-set answer choices (e.g. questionnaires). Interviews can be thematically analysed (see later section on this).

Interview questions can be:

  • Structured : a pre-set list of questions is asked.
  • Unstructured : the interview progresses as more of an on-going conversation between interviewer and interviewee.

STRENGTHS OF INTERVIEWS

  • Interviews provide richly detailed qualitative descriptions of participants’ subjective (personal) understanding of their behaviour, beliefs and motivations .
  • With open-ended questions , interviewees may be able to suggest and shed light on further areas of research and interest relating to the topic they are being interviewed about.
  • Structured interviews allow all participants to be asked the same questions , making general patterns in answers easier to analyse and keeping the interview limited to the subject matter the interviewer wants to cover.

LIMITATIONS OF INTERVIEWS

  • If socially sensitive questions are asked participants may give socially-appropriate responses. E.g. if an interviewer asks whether someone holds racist beliefs it is unlikely they will admit to this.
  • Open-ended questions can be difficult to interpret and analyse as participants may give lengthy, personal answers. This makes it harder to analyse broad patterns and trends in participants’ beliefs and behaviours.

CASE STUDIES ( AQA A-level Psychology resources)

These are longitudinal studies (conducted over a long period of time) which focus in great detail on an individual or a small group . They are often used in the field of psychopathology and child development, and may include a variety of methods such as unstructured interviews and observations .

STRENGTHS OF CASE STUDIES

  • Case studies provide richly detailed descriptions of participants’ subjective (personal) understanding of their behaviour, beliefs and motivations .
  • Case Studies usually follow the progress and changes an individual goes through over time.

LIMITATIONS OF CASE STUDIES

  • Case studies are associated with problems of subjectivity and personal interpretation on the part of the psychologist: e.g. the psychologist may be biased in their viewpoint and interpretation of events and behaviour: for example, with the case study of Little Hans, Freud was accused of interpreting Hans' behaviour to make it support his theory of the Oedipus Complex. Thus, because case studies do not use controlled scientific methods of experimentation, they are thought to lack scientific objectivity and proof.
  • For the above reason, and for the fact they are only carried out on one individual, case studies suffer a lack of reliability and generalisability .

CONTENT ANALYSIS ( A-level Psychology notes)

This is a technique where researchers identify themes or behavioural categories and count how many times they occur (see later section on thematic analysis) . It is often used with written or visual material such as interviews, open-ended questionnaires, diaries, magazines, films, etc. A coding system of categories will be developed whereby we count the number of times a particular piece of content arises.

For example, we might ask mothers with children who have just started primary school to keep a diary of their child’s response to this and then count how many times categories such as ‘child crying’, ‘child showing clingy behaviour’, ‘child showing anger to mother’ occur.

STRENGTHS OF CONTENT ANALYSIS

  • It allows qualitative data (writing or visual material) to be put into a quantitative form (counting behaviours) , so that statistical analysis can take place and data can be represented in tables and graphs.

LIMITATIONS OF CONTENT ANALYSIS

  • Constructing a coding system involves the risk of an investigator imposing their own meaning on the data. The investigator might choose coding categories they think are important and overlook categories which actually are important. Thus, there may be problems of subjectivity and personal bias .

THEMATIC ANALYSIS ( AQA A-level Psychology notes)

  Interviews, open-ended questionnaires and content analysis (all qualitative research techniques) can be analysed in terms of themes which occur in the content of responses given by participants.  We can count these themes to produce quantitative data . For example, if we interviewed adults who had experienced maternal deprivation as an infant we could analyse what major themes occurred in interviews (e.g. feelings of loss, desire for love, etc.) and count how many times these themes occurred.

STRENGTHS OF THEMATIC ANALYSIS

  • We can turn complex qualitative data into quantitative data which can then be statistically analysed. For example, 65% of participants referred to feelings of loss in their interviews.

LIMITATIONS OF THEMATIC ANALYSIS

  • If a number of researchers are conducting thematic analysis on the same data they may interpret and count themes in a different way to each other which would lead to a lack of reliability. (This could be overcome through testing for inter-rater reliability.)

PARTICIPANTS & SAMPLING ( A-level Psychology revision notes)

It is important to select participants carefully when conducting research to ensure the study has population validity (see section on Validity above).

The term population refers to all the people within a certain category whom we wish to study: e.g. all schizophrenics, all 5-11 year olds, all pregnant women, etc. From this population we draw a smaller sample . Ideally, we want our sample to be fairly large and to be representative of the population as a whole (i.e. a good cross-section in terms of age, gender, ethnicity, etc.)

With a large , representative, random sample of participants we should be able to generalise (apply) our findings to the population as a whole (i.e. say that what is true of our sample is true of the population as a whole).

There a number of different sampling methods we can employ to select participants each with its own advantages and disadvantages.

  • Random sampling . The sample is randomly selected from the population: e.g. picking names at random out of a hat. Although this method is truly random it does not guarantee a representative sample .
  • Volunteer (self-selecting) sampling . Participants respond to an advert placed by the researcher: e.g. Milgram’s obedience study. This method is not random and doesn’t guarantee a representative sample as only certain types of people are likely to volunteer. However, volunteers are likely to make motivated and cooperative participants in research.
  • Opportunity sampling . Potential participants are approached by the researcher and asked whether they would be willing to take part in a study. This method is not random and doesn’t guarantee a representative sample as only certain types of people are likely to agree to take part. However, those who do are likely to make motivated and cooperative participants in research.
  • Systematic sampling . Taking every 'nth' person on a list: e.g. every 10th person on a school register. Not random or guaranteed to be representative .
  • Stratified sampling . The population is assessed for what proportion of particular characteristics it contains (e.g. age, gender, ethnicity, social class, etc.) and representative numbers of participants possessing these characteristics are randomly sampled to form the sample.

For example, a school population of 1000 students has 40% boys and 60% girls, and 50% of all students are below the age of 16 and 50% are aged 16+.

If we wanted a stratified sample of 100 students we would select

  • 40 boys (40% of all students) and 60 girls (60% of all students)
  • 20 boys below the age of 16 (50% of the 40 boys)
  • 20 boys aged 16 or over (50% of the 40 boys)
  • 30 girls below the age of 16 (50% of the 60 girls)
  • 30 girls aged 16 or over (50% of the 60 girls)

AQA A LEVEL PSYCHOLOGY STRATIFIED SAMPLING

Stratified sampling is truly representative and random.

ETHICAL ISSUES AND WAYS OF DEALING WITH THEM ( AQA A-level Psychology revision notes)

The British Psychological Society (BPS) publish ethical guidelines which psychologists are supposed to follow when planning and conducting research.

DECEPTION AND INFORMED CONSENT  

Participants should not be deceived (lied to) or involved in experiments unless they have agreed to take part. One way of dealing with this is to make sure that the participant is told precisely what will happen in the experiment before requesting that he or she give voluntary informed consent to take part. In reality, many experiments require some level of deception to avoid demand characteristics, hence it is often difficult to receive fully informed consent.

For example, Milgram got consent to take part in an experiment, but not informed consent as participants did not know the true aim of the study.

Dealing with Deception and Lack of Informed Consent

  • Debriefing . At the end of the experiment participants should be informed about the aims, findings and conclusions of the investigation and the researcher should take steps to reduce any distress that may have been caused by the experiment. This may be in the form of counselling . They should also be asked if they have any questions.
  • Presumptive Consent . The general public are surveyed and asked whether they believe that the breaking of ethical guidelines in a particular study is justified or not . This solution is often used in relation to experiments where participants cannot be asked for consent as the study requires them to remain naïve: e.g. field experiments such as Hofling.
  • Prior General Consent . In this proposed solution, people volunteer in advance to take part in future research, agreeing in general terms that studies may involve deception or the withholding of information. Thus, they serve as a pool of participants who may be used at some point in the future.
  • Retrospective consent involves asking the participants for consent after they have participated in the study.
  • In the case of young children or the mentally ill , parents or guardians can provide consent if they judge a procedure is in the client’s best interests: e.g. whether a child with ADD should be prescribed a drug. Approval could also be obtained after consulting professional colleagues: e.g. psychiatrists debating whether a depressed patient would benefit from a drug treatment.

RIGHT TO WITHDRAW

Participants should have the right to withdraw from an experiment at any time.

They should be informed of this right in the standard briefing instructions given to them before the experiment commences. They have the right to insist that any data they have provided during the experiment should be destroyed.

PROTECTION FROM PHYSICAL AND PSYCHOLOGICAL HARM

Participants should be exposed to no more risk than they would encounter in their normal lives. They should also be protected from any kind of psychological harm such as stress, embarrassment or damage to their self-esteem .  If participants are showing signs of distress they should be reminded of their right to withdraw .

CONFIDENTIALITY

Information about participants' identities should not be revealed: data can be kept confidential by ensuring participants remain anonymous. Freud, for example, gave his clients pseudonyms: e.g. Little Hans.

FORMS & INSTRUCTIONS ( Psychology A-level revision)

CONSENT FORM

If asked to write a consent form, to get full marks you must provide sufficient information on both ethical and methodological issues for participants to make an informed decision. You must also write it as it would be read out to participants.

The form should contain

  • The purpose of the study
  • The length of time required of the participants
  • Details of any parts of the study that participants might find uncomfortable
  • Details about what will be required of them, and what they will have to do
  • There is no pressure to take part in the study at all
  • Right to withdraw (they can leave at any time, without giving a reason, keep any money they have been paid, and any data collected on them will be destroyed)
  • Reassurance about protection from harm
  • Reassurance about confidentiality of the data
  • They should feel free to ask the researcher any questions at any time
  • They will receive a full debrief at the end of the programme

STANDARDISED INSTRUCTION FORM FOR PARTICIPANTS

You need to use the details in the description of the study to write an appropriate set of instructions for participants. The instructions should be clear, concise, use formal language and be as straightforward as possible. They must:

  • Explain the procedures of this study relevant to participants.
  • Include a check of understanding of instructions.

(This is not a consent form so references to ethical issues are not necessary.)

PEER REVIEW ( A-level Psychology revision)

Peer review is the process by which psychological research papers are subjected to independent scrutiny (close examination) by other psychologists working in a similar field who consider the research in terms of its validity and significance . Such people are generally unpaid . Peer review happens before research is published.

Peer review is an important part of this process because it provides a way of checking the validity of the research, making a judgement about the credibility (believability) of the research, and assessing the quality and appropriateness of the design and methodology . It is a means of preventing incorrect data from entering the public domain. This is important to ensure that any funding is being spent correctly .

Peers are also in a position to judge the importance or significance of the research in a wider context .  They can also assess how original the work is and whether it refers to relevant research by other psychologists.  They can then make a recommendation as to whether the research paper should be published in its original form, rejected or revised in some way.  This peer review process helps to ensure that any research paper published in a well-respected journal can be taken seriously by fellow researchers and the public. 

MAJOR FEATURES OF THE SCIENTIFIC METHOD ( AQA A-level Psychology revision)

Science is the unbiased observation and measurement of the natural world. It is the only tool humanity has developed for establishing factual truths about the world. Science allows us to establish the laws of the physical world and from this knowledge create technology .

Since the 1700’s the scientific method has been developed, scrutinised and refined.

Major features of the scientific methods are

  • Empiricism – Information is gained through direct observation or experiment on physically observable and measurable phenomena rather than by reasoned argument, unfounded beliefs, faith or superstition.
  • Objectivity – Scientists should strive to be unbiased and non-interpretative in their observations and measurements. Prior expectations and preconceptions should be put aside. 'Subjective' can be thought of as meaning biased, personal and interpretive.
  • Replicability – One way to demonstrate the validity of any observation or experiment is to repeat it. If the outcome is the same, this confirms the truth of the original results, especially if the observations have been made by a different person. In order to achieve such replication it is important for scientists to record their methods carefully so that the same procedures can be followed in the future.
  • Control – Scientists seek to demonstrate causal relationships between variables. The experimental method is the only way to do this – where we vary one factor (the independent variable) and observe its effect on a dependent variable. In order for this to be a ‘fair test’ all other conditions must be kept the same, i.e. controlled . This allows us to establish the cause-effect relationships which underlie the laws of nature.
  • Theory construction – One aim of science is to record facts, but an additional aim is to use these facts to construct theories to help us understand and predict the natural world. A theory is a collection of general principles that explain observations and facts . Theories should be based on a sound body of valid and reliable scientific study.
  • Hypothesis Testing – A good theory must be able to generate testable hypotheses . Popper developed the concept of falsification – a theory can never be conclusively proven true; instead, a scientific theory must be capable of being disproved (falsified), and it is strengthened each time an attempt to falsify it fails. A claim that cannot be disproved in principle is not scientific.

PARADIGMS AND PARADIGM SHIFTS ( AQA A-level Psychology revision guide)

A paradigm refers to the accepted and approved of ways of thinking, understanding, theorising and researching that exist and are shared within any one particular science. For example, biologists all tend to work within a paradigm where they accept basic concepts (evolution and Darwinian theory) as true and agree on how biology should be studied (scientific experimentation).

Psychology is often described as pre-paradigmatic as there is no complete, shared agreement between psychologists about how they should understand and explain human behaviour or what the best methods to study behaviour are. Psychology encompasses a number of conflicting approaches (e.g. behaviourism, biological, cognitive, psychodynamic, evolutionary, etc.) which disagree over what the major influences are on behaviour and what methods should be employed to study behaviour.

A paradigm shift occurs when there is a fundamental change in how scientists in a particular field understand and research subject matter due to evidence proving that the previous paradigm was inadequate/incorrect in some way. For example, in the field of physics, Newton's laws were the dominant paradigm from the 18th to early 20th century before the work of Einstein resulted in a paradigm shift in the way in which physicists understood the physical laws of the natural world.

CONVENTIONS FOR REPORTING PSYCHOLOGICAL INVESTIGATIONS ( A-level Psychology resources)

Psychological investigations are written up/reported in the same way by all psychologists.

Abstract – A summary of the study covering the aims/hypothesis, method/procedures, results and conclusions. Allows a reader to gain a quick overall understanding of a study.

Introduction/Aim/Hypotheses – What the researchers intend to investigate. This often includes a review of previous research (theories and studies), explaining why the researchers intend to conduct this particular study. The researchers may state their research predictions and/or a hypothesis or hypotheses.

Method – A detailed description of what the researchers did , providing enough information for replication of the study. Included in this section is:

  • Information about the participants (how they were selected , how many were used, and the experimental design )
  • The independent and dependent variables
  • The testing environment
  • Materials used
  • Procedures used to collect data
  • Any instructions given to participants before (the brief ) and afterwards (the debrief )

For full marks, the method section should be written clearly , succinctly and in such a way that the study would be replicable . It should be set out in a conventional reporting style, possibly under appropriate headings . The important factor here is whether the study could be replicated.

Results – This section contains statistical data including descriptive statistics (tables, averages and graphs) and inferential statistics (the use of statistical tests to determine how significant the results are).

If you are asked to outline and discuss the results of a study mention the following points

  • Write the results out clearly in words: e.g. ‘the mean number of objects remembered for participants listening to music was seven, but for those not listening to music was nine’.
  • Refer to the standard deviation or range and explain what they mean, e.g. ‘those listening to music had a higher standard deviation than those not listening to music, meaning that their scores varied more around the mean. So there were more individual differences in participants’ memories when listening to music.’
  • Say whether the results were significant and how you know this (refer to the observed value (OV), the critical value (CV) and the level of significance), and what it means if they were.
  • Discuss issues of validity
  • Discuss issues of reliability
Discussion – In this section the researchers offer explanations of the behaviours they observed and might also consider the implications of the results (how they can be applied to the real world) and make suggestions for future research. The researchers must consider their work critically, evaluating it in terms of validity, reliability and any short-comings or criticisms, and discuss how their research relates to the background research covered in the introduction.

THE IMPLICATIONS OF PSYCHOLOGICAL RESEARCH FOR THE ECONOMY ( AQA A-level Psychology resources)

Although it is difficult to quantify how much psychology contributes to the economy, Psychology university departments receive over £50 million in research grants annually.

Psychological research is used in diverse fields such as medicine, psychiatry, therapy, social work, childcare, advertising, marketing, business, forensic psychology, the army, education, etc.

Apart from direct benefits, Psychology indirectly contributes to the economy: for example, in the UK, 40% of people claiming incapacity benefits are doing so due to anxiety or depression; therefore, psychotherapy may assist the long-term unemployed in returning to work, which increases tax revenue.

Psychology may also assist in finding solutions to wider social problems relating to crime, aggression, child abuse, etc. This could contribute to the economy by reducing levels of crime (theft and damage to property), reducing the prison population (paid for by the tax-payer) and increasing tax revenue (people working rather than being in prison).

DESCRIPTIVE STATISTICS ( A-level Psychology notes)

Once a study has been conducted that produces quantitative data , patterns and trends can be simply analysed using some of the following techniques.

MEASURES OF CENTRAL TENDENCY

This refers to the 3 forms of average – Mean, Median and Mode – which tell us about the average within a set of data.

THE MEAN

The mean is calculated by adding all scores and dividing by the total number of scores.

For example, a set of scores is produced in a memory test:

5, 7, 8, 8, 10, 11, 14, 15, 45

Mean = 123 divided by 9 = 13.67

  • An advantage of the mean is that it is the truest form of average because it uses all scores within a set of data.
  • A disadvantage is that the mean may be artificially inflated or deflated by extreme scores (outliers) in a set of data (in such a case we can say that the data is skewed ). In the above example the extreme score of 45 artificially inflates the mean to an unrealistically high level .

THE MEDIAN

The median is the middle score in a set of ranked (put in order from low to high) data.

  • An advantage of the median is that it is not affected by extreme scores ( outliers ).
  • A disadvantage is that the median does not take account of all the scores in a set of data, so it is less sensitive than the mean.

E.g. 2, 4, 4, 5, 9, 15, 16 Median = 5 (Take mean average if 2 numbers in middle).

       2, 4, 5, 9, 15, 16, 17 Median = 9 (Take mean average if 2 numbers in middle).

THE MODE

The mode is the most frequently occurring score in a set of data.

  • An advantage of the mode is that it is not affected by extreme scores ( outliers ) and can be used with categories of data (nominal data).
  • A disadvantage is that the mode can be altered a lot by small changes in a set of data. Also, a set of scores may have no mode value .

E.g.  2, 2, 4, 5, 9, 15, 16   Mode = 2

         2, 3, 4, 5, 9, 16, 16 Mode = 16

CALCULATING %’s

To calculate how much 1 number is as a percentage of another number, divide the 1st number by the 2nd and multiply by 100.

For example, if Bob earns £26,060 a year and Nicola earns £137,540 then 

26,060/137,540 x 100 = 18.95 (to 2 decimal places)

Therefore, Bob earns 18.95% of Nicola's salary.

MEASURES OF DISPERSION

These tell us about the ‘spread’/‘dispersion’/’variability’ within a set of scores – the range and the standard deviation (SD).

THE RANGE

This simply tells us about the range of scores in a set of data . The range is calculated by taking the highest score and subtracting the lowest score.

THE STANDARD DEVIATION (SD)

The standard deviation tells us about the amount of variability from the mean .

For example, 2 classes of students with 2 different psychology teachers gained the following % scores in an end of year test.

GROUP 1: 18, 24, 31, 46, 55, 64, 79, 82, 90, 98.  Mean = 59

GROUP 2: 49, 52, 54, 57, 68, 60, 62, 64, 66, 68.  Mean = 60

Although the 2 groups have very similar mean scores, GROUP 1 have a much larger SD – there is a lot of variability from the mean whereas there is little variation from the mean in GROUP 2.

The SD is a stronger measure of dispersion than the range because

  • The SD is a measure of dispersion that is less easily distorted by a single extreme score .
  • The SD takes account of the distance of all the scores from the mean.
  • The SD does not just measure the distance between the highest score and the lowest score .

DISPLAYS OF DATA ( AQA A-level Psychology notes)

Quantitative data can be plotted on a variety of graphs and charts.

GRAPHS are used to display continuous scores ( ordinal data : see Inferential Statistics below). For example, to record participants' scores in a memory test (x/20).

AQA A LEVEL PSYCHOLOGY GRAPHS

HISTOGRAMS show scores that have been grouped into intervals (interval data) rather than plotted as individual continuous scores. (See Inferential Statistics below.)


BAR CHARTS are not used to display scores – rather they display categories of information ( nominal data : see Inferential Statistics below). For example, the number of participants in a particular category such as: favourite colour, borough of London lived in, subjects studied at A Level, etc.


Note: whereas histogram bars join because they display continuous sets of scores, bar chart bars are separate as they show separate categories of information. 

SCATTERGRAMS are used to display data from correlation studies (see previous notes on Correlation Studies).


DISTRIBUTIONS: NORMAL AND SKEWED DISTRIBUTIONS AND THEIR CHARACTERISTICS

Many characteristics of populations follow a normal distribution: e.g. height, weight, shoe size, etc.

IQ scores show a ‘normal’ distribution, as below: most scores cluster around the mean average and, as scores decrease or increase in either direction, fewer and fewer people possess these low or high scores. 68% of the population have an IQ between 85 and 115, and only around 2% of the population have an IQ between 130 and 145.

[Figure: normal distribution of IQ scores]

However, distributions of characteristics in populations may be ‘skewed’ (distorted in one direction or another). For example, salary in the UK is positively skewed : i.e. a small % of the population earn a very large salary. The IQs of children at a school for the gifted would be negatively skewed (i.e. few with a low IQ, lots with a high IQ).

[Figure: positively and negatively skewed distributions]

INFERENTIAL STATISTICS

Although quantitative data can be analysed in fairly simple ways using measures of central tendency and dispersion, psychologists and scientists employ more complex statistical techniques to analyse results.

Experiments and correlation studies involve assessing whether

  • there is a significant difference between how the 2 conditions of the IV affect the DV
  • there is a significant correlation between 2 co-variables.

The term ‘significant’ can be thought of as referring to whether there is a real difference or correlation between variables, rather than one which has simply occurred by chance.

For example, in the drink-driving study we may find a mean average score of 16/20 for the sober group and 9/20 for the alcohol group – clearly this is an important ‘significant’ difference. On the other hand if the scores were 14/20 and 11/20 we would be less sure if there was a real ‘significant’ difference between the groups.

At a basic level, statistical analysis is a tool to assess whether we have or have not found a significant difference or correlation in a study.

There are a number of different statistical tests that can be used to analyse data. Which test is appropriate to use is decided by

  • Whether the study is an experiment or a correlation study
  • Whether the study’s design is an independent groups design or a repeated measures design
  • Whether data is at the ordinal , nominal, interval or ratio level (see below)


LEVELS OF DATA

Quantitative data comes in different forms/types.

  • Ordinal Data – scores which can be ranked from low to high: e.g. scores in an IQ test, memory test or personality questionnaire.
  • Nominal Data – data in the form of categories of information: for example, the number of students studying particular subjects at college.

For the following examples decide whether data is ordinal or nominal.

Height, eye colour, borough of London lived in, stress score, favourite animal, skill at driving, reaction speed.

  • Interval Data – ordinal data which has been grouped into intervals: e.g. 0-5, 6-10, 11-15, 16-20, etc.

PROCEDURES FOR STATISTICS TESTS 

In the exam you are only required to know how to conduct inferential statistics using the Sign Test ; however, all statistics tests follow the basic principles below.

  • Data from an experiment or correlation study is processed through a number of statistical/mathematical formulae. This will eventually produce one single number which ‘describes’ the data as a whole – this is referred to as the Calculated/Observed Value (OV)
  • The OV is then compared to a Critical Value (CV). This is a number found by cross-referencing certain information on a table of statistical significance .
  • Different statistics tests have different rules
  • In some tests if the OV > CV then the statistics test shows that we have found a significant difference/correlation and can, therefore, accept the experimental hypothesis . If the OV < CV we reject the experimental hypothesis .
  • In other tests the reverse is true: e.g. if OV < CV we accept the experimental and reject the null.
  • In the exam you will be told which of the 2 rules above applies to the statistics test concerned.
  • At a basic level, therefore, statistical analysis of data is a way of establishing whether we have or haven’t found significant results.

LEVELS OF STATISTICAL SIGNIFICANCE AND PROBABILITY (P)

In theory, psychologists/scientists never say that their findings are 100% accurate and true – there is always a probability that although results seem to indicate particular findings they are incorrect and findings have occurred by chance .

The concept of level of significance is used to indicate to readers to what percentage probability we can say that a particular set of findings are accurate and true, and to what extent results may have simply occurred due to chance .

For most pieces of psychological research a significance level of P < 0.05 is used. This indicates a 95% probability that results are accurate and true and a <5% probability that results occurred due to chance.

Higher levels of significance can be set when the accuracy of research findings is more important: e.g. in trials of a new drug. Thus findings which are significant at P < 0.01 mean that researchers are 99% confident results are true and there is only a 1% probability they occurred due to chance.

Note: < means ‘less than’; ≤ means ‘equal to or less than’.

[Table: levels of significance, e.g. P < 0.05 and P < 0.01]

Depending on the results of statistical analysis of data we may find that results are significant at any one of the above levels of probability. The smaller the value of P at which results are significant (e.g. P < 0.01 rather than P < 0.05), the stronger our results are.

TYPE 1 & TYPE 2 ERRORS

Type 1 errors – calling something true when it’s false.

When a statistics test indicates that the experimental hypothesis should be accepted but, in fact, the results are due to chance/random factors. If the level of significance is set at 5%, there will always be a 1-in-20 (5%) chance of a type 1 error.

Clearly, the less stringent the level of significance (e.g. P < 0.1), the greater the chance that a type 1 error will occur (in this case 10%).

Type 2 errors – calling something false when it’s true.

When a statistics test indicates that the experimental hypothesis should be rejected but, in fact, there really is a significant difference or correlation.

Clearly, the more stringent the level of significance (e.g. P < 0.005), the greater the chance that a type 2 error will occur.
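The 1-in-20 risk of a Type 1 error can be illustrated with a small, purely hypothetical simulation (not from the notes): even when the IV genuinely has no effect, a cut-off set at roughly the P < 0.05 level will still produce ‘significant’ results about 5% of the time.

```python
# Hypothetical simulation of Type 1 errors when the null hypothesis is really true.
import random

def null_study(n_participants=20):
    """One fake study in which the IV has no effect: each participant is equally
    likely (a coin flip) to score higher in either condition."""
    higher_in_a = sum(random.random() < 0.5 for _ in range(n_participants))
    s = min(higher_in_a, n_participants - higher_in_a)
    return s <= 5   # chance alone passes this cut-off about 4% of the time (roughly the P < 0.05 level)

trials = 10_000
false_positives = sum(null_study() for _ in range(trials))
print(f"Type 1 error rate ~= {false_positives / trials:.3f}")   # usually prints a value close to 0.04
```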


THE SIGN TEST

The Sign Test is the one statistics test you need to know how to conduct in full.

Sign tests are used in experiments with a repeated measures design and nominal data.

Example and procedures

We could conduct a study into whether there is a difference in people’s memory for a list of 10 words they’ve been read (DV = memory score x/10) depending on whether they heard the words in quiet conditions (1st condition of the IV) or noisy conditions (2nd condition of the IV). We would use a 1-tailed hypothesis for this study as previous research indicates that noise would disrupt memory ability.

Once the experiment is conducted, the data (results from participants) are put into a results table.

[Results table: each participant’s memory score in the quiet and noisy conditions]

Steps to calculate Sign Test

  • Subtract the score for the experimental condition from the score for the control condition . If the result is negative record a – sign; if it is positive record a + sign; if there is no difference record a 0.
  • Count the number of times the less frequent sign occurs . In the above example, the + sign is the least frequent. Call this value S . Therefore, S = 2.
  • Count the total number of + and – signs . Call this value N . Therefore, N = 7.
  • Decide whether a 1 or 2-tailed hypothesis was used . In the above example, we used a 1-tailed hypothesis.
  • Consult the table of statistical significance (below) for the Sign Test to find the critical value (CV).
  • Look down the left hand column marked N until you get to the total number of + and – signs. In the case described, N = 7.
  • Cross-reference N with the column for either a 1 or 2-tailed test (depending on whether your hypothesis is 1 or 2-tailed) and the level of significance value 0.05 (this is your level of significance – P < 0.05 ). In the case above this gives a value of 0 . Call this value the critical value (CV) . Therefore, CV = 0.
  • If S is equal to or less than the critical value (S ≤ CV) then we have found a significant difference between how the 2 conditions of the IV affected the DV: i.e. there is a significant difference in how noisy and quiet conditions affect memory ability. In the example above S (S = 2) is greater than the critical value (CV = 0), therefore we have not found a significant difference. (A code sketch of these steps follows the table below.)

  Table of Critical Values for the Sign Test

[Critical values table not reproduced; for the example above (N = 7, 1-tailed, P < 0.05) the critical value is 0]
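The following is a minimal sketch of the Sign Test steps above in Python. The participant scores are hypothetical (the original results table is not reproduced here), but they are chosen so that the summary matches the worked example: S = 2, N = 7, 1-tailed, critical value 0 at P < 0.05.

```python
# Sign Test sketch using hypothetical memory scores (/10) for 8 participants.
quiet = [6, 5, 7, 8, 8, 5, 8, 6]   # control condition: words heard in quiet
noisy = [8, 7, 9, 7, 8, 7, 9, 5]   # experimental condition: the same participants in noise

# Step 1: record the sign of each participant's difference (control minus experimental).
signs = []
for q, n in zip(quiet, noisy):
    diff = q - n
    if diff > 0:
        signs.append("+")
    elif diff < 0:
        signs.append("-")
    # ties (diff == 0) are dropped and do not count towards N

plus, minus = signs.count("+"), signs.count("-")
S = min(plus, minus)        # Step 2: how often the less frequent sign occurs
N = plus + minus            # Step 3: total number of + and - signs

# Steps 4-7: for N = 7, a 1-tailed test and P < 0.05, the tabled critical value is 0.
CV = 0
significant = S <= CV       # significant only if S is equal to or less than CV
print(f"S = {S}, N = {N}, CV = {CV}, significant: {significant}")   # S = 2 > 0, so not significant
```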

Experimental Design: Types, Examples & Methods


Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.

Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group, not the control group.

The researcher must decide how he/she will allocate their sample to the different experimental groups.  For example, if there are 10 participants, will all 10 participants participate in both groups (e.g., repeated measures), or will the participants be split in half and take part in only one group each?

Three types of experimental designs are commonly used:

1. Independent Measures

Independent measures design, also known as between-groups , is an experimental design where different participants are used in each condition of the independent variable.  This means that each condition of the experiment includes a different group of participants.

This should be done by random allocation, ensuring that each participant has an equal chance of being assigned to either group.

Independent measures involve using two separate groups of participants, one in each condition.


  • Con : More people are needed than with the repeated measures design (i.e., more time-consuming).
  • Pro : Avoids order effects (such as practice or fatigue) as people participate in one condition only.  If a person is involved in several conditions, they may become bored, tired, and fed up by the time they come to the second condition or become wise to the requirements of the experiment!
  • Con : Differences between participants in the groups may affect results, for example, variations in age, gender, or social background.  These differences are known as participant variables (i.e., a type of extraneous variable ).
  • Control : After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).

2. Repeated Measures Design

Repeated Measures design is an experimental design where the same participants participate in each independent variable condition.  This means that each experiment condition includes the same group of participants.

Repeated Measures design is also known as within-groups or within-subjects design .

  • Pro : As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
  • Con : There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior.  Performance in the second condition may be better because the participants know what to do (i.e., practice effect).  Or their performance might be worse in the second condition because they are tired (i.e., fatigue effect). This limitation can be controlled using counterbalancing.
  • Pro : Fewer people are needed as they participate in all conditions (i.e., saves time).
  • Control : To combat order effects, the researcher counter-balances the order of the conditions for the participants.  Alternating the order in which participants perform in different conditions of an experiment.

Counterbalancing

Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”

We expect the participants to learn better in “no noise” because of order effects, such as practice. However, a researcher can control for order effects using counterbalancing.

The sample would be split into two groups, and the two conditions would be labelled A and B. For example, group 1 does ‘A’ then ‘B,’ and group 2 does ‘B’ then ‘A.’ This is to eliminate order effects.

Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.

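A minimal sketch of how the allocation might be done in practice (participant labels are hypothetical):

```python
# Counterbalancing a repeated measures design: half the sample does A then B,
# the other half does B then A, so order effects are spread across both conditions.
import random

participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]   # hypothetical sample
random.shuffle(participants)                                       # randomise who falls in each half

half = len(participants) // 2
orders = {p: ("A", "B") for p in participants[:half]}              # group 1: condition A then B
orders.update({p: ("B", "A") for p in participants[half:]})        # group 2: condition B then A

for p, order in orders.items():
    print(p, "->", " then ".join(order))
```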

3. Matched Pairs Design

A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each pair is then placed into the experimental group and the other member into the control group .

One member of each matched pair must be randomly assigned to the experimental group and the other to the control group.


  • Con : If one participant drops out, you lose two participants’ data.
  • Pro : Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
  • Con : Very time-consuming trying to find closely matched pairs.
  • Pro : It avoids order effects, so counterbalancing is not necessary.
  • Con : Impossible to match people exactly unless they are identical twins!
  • Control : Members of each pair should be randomly assigned to conditions. However, this does not solve all these problems.

Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:

1. Independent measures / between-groups : Different participants are used in each condition of the independent variable.

2. Repeated measures /within groups : The same participants take part in each condition of the independent variable.

3. Matched pairs : Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.

Learning Check

Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.

1 . To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.

The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.

2 . To assess the difference in reading comprehension between 7 and 9-year-olds, a researcher recruited each group from a local primary school. They were given the same passage of text to read and then asked a series of questions to assess their understanding.

3 . To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.

At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.

4 . To assess the effect of organization on recall, a researcher randomly assigned student volunteers to two conditions.

Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.

Experiment Terminology

Ecological validity.

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.


AQA A-level Psychology Research Methods

This section provides revision resources for AQA A-level psychology and the Research Methods chapter. The revision notes cover the AQA exam board and the new specification. As part of your A-level psychology course, you need to know the topics covered within this chapter.

We've covered a number of the topics within research methods already in our GCSE psychology research methods content, since there is significant overlap between GCSE and A-level. This section will look to cover both AS and A level psychology content for research methods.

The Experimental Method

The AQA AS/A level psychology specification states you need to know the following for research methods:

  • Experimental method. Types of experiment, laboratory and field experiments; natural and quasi-experiments.

The experimental method is a scientific method that involves the manipulation of variables to determine cause and effect.

Participants are usually randomly allocated without bias to different testing groups which results in the groups being fairly similar. The procedures within the experiment should also be standardised, which means they are kept the same for all participants.

Within the experiment, the researcher will manipulate an independent variable (IV) to see if this has an effect on the dependent variable (DV). For example, the consumption of a stimulant such as coffee (IV) may be manipulated to see its effect on reaction time (DV).

Within an experiment, variables need to be operationalised so they can be manipulated and their effects measured. Some variables are more difficult to operationalise and in turn only allow one aspect of a variable to be measured. Without the ability to operationalise variables, the results will be unreliable and impossible to replicate to determine their validity.

Types of Experiment

Research methods for A level psychology identifies 4 different types of experiments we need to know about which are:

  • Laboratory experiments,
  • Field experiments,
  • Natural experiments 
  • Quasi-experiments.

We've covered many of these already in our GCSE psychology content, but for ease of use we've copied over the main elements.

Laboratory Experiments

"Laboratory experiments are experiments that are conducted in a controlled setting , usually a research laboratory where participants are aware of being observed and part of a study. Laboratory experiments tend to have high internal validity because researchers can control all the variables so the main differences between the experimental condition and control group are only the independent variable whose effect is being monitored. This allows researchers to more confidently assume that any differences between the conditions are due to the independent variable."

Source: Laboratory Experiments (GCSE Psychology)

Field Experiments

A field experiment is conducted in a more natural or everyday environment, where the behaviour being measured is more likely to occur naturally, unlike the laboratory experiment. The field experiment can be conducted anywhere in real-world settings, with researchers manipulating an independent variable to measure its impact on the dependent variable. A field experiment can include confederates that participants are unaware of being involved, to test their response in the field setting. One key difference from a laboratory experiment is that participants may not be aware of being observed or studied. This is in an attempt to generate more realistic behaviour or responses from them that can generalise to real-world settings.

Source: Field Experiments (GCSE Psychology)

Natural Experiments

"A natural experiment is conducted when ethical or practical reasons to manipulate an independent variable (IV) are not possible. It is therefore said that the IV occurs 'naturally'. The dependent variable (DV), may however, be tested in a laboratory, for example, the effects of institutionalisation in some form, which may occur naturally due to imprisonment or disruption of attachment through the care system and how it may affect psychological development such as intellect or emotional development. Another good example of a natural experiment is the study by Charlton et al. (2000) which measured the effects of television. Prior to 1995, the people of St. Helena, a small island in the Atlantic had no access to TV however it's arrival gave the researchers to examine how exposure to western programmes may influence their behaviour. The IV in this case was the introduction of TV which was not controlled by researchers and something they took advantage of would be practically difficult to control. The DV was measures of pro or anti-social behaviours that were assessed through the use of questionnaires, observations and psychological tests. These types of experiments would either impractical or unethical to implement and therefore cases where this occurs naturally due to normal circumstances may be examined through natural experiments."

Source: Natural Experiments (GCSE Psychology)

Quasi-Experiments

In quasi-experiments, the independent variable (IV) is naturally occurring, similar to a natural experiment, however, the dependent variable (DV) may be measured in a laboratory. The key feature of a quasi-experiment is that the IV has not been created by anyone. An example where an IV might be occurring naturally would be a study of gender where males and females are compared.

Quasi-experiments are often used when it might be unethical to manipulate an IV and a common feature of such experiments is that random allocation of participants is not possible.

Strengths and Weaknesses of Quasi-Experiments

  • A weakness of quasi-experiments is that randomisation is not used with the samples. This limits the study's ability to draw a causal link between cause and effect and to rule out confounding variables, which are more likely to occur. This makes results less reliable and difficult to replicate due to the lack of control over the IV.
  • Another issue with quasi-experiments using non-random samples is that this increases the possibility of having groups that are not comparable due to significant differences between the samples. This means the results may be due to these differences rather than the IV being measured, which would mean such studies lack internal validity and may not be measuring what they intended to.
  • A strength of quasi-experiments is that they allow researchers to test a naturally occurring IV that may be unethical to manipulate in the context of an experiment. This avoids ethical issues that would prevent such an experiment from taking place.
  • Another strength of quasi-experiments is they can be argued to be more realistic and have ecological validity as they look to test something that is naturally occurring. Therefore, the behaviours observed should also be more realistic and have validity.

Observational Techniques

  • Observational techniques. Types of observation: naturalistic and controlled observation; covert and overt observation; participant and non-participant observation.

During an observational study, a researcher will watch or listen to participants engaging in whatever behaviour is being studied and record these observations. An important aspect of observations is that they are often used within an experiment as a way to measure the dependent variable. Therefore, observations are less of a research method and more of a technique that is used in conjunction with other research methods.

There are different types of observational techniques that are used which we will explore.

Naturalistic and Controlled Observations

In a naturalistic observation, behaviour is studied in a natural situation where everything has been left as it would be normally without interference from the researcher. Examples of naturalistic observations might include children playing in their normal environment i.e. a nursery or an animal being observed in an environment that is natural to them such as a zoo (if raised in captivity) or the wild. 

During such observations, researchers will normally take great care not to intrude or interfere with the behaviour they are observing, to ensure the behaviour is realistic.

Controlled observations involve variables in the environment being altered by the researcher which would reduce the 'naturalness' of the environment. This could therefore alter the naturalness of the behaviour being studied too. Participants are also more likely to be aware of being observed as the study may be conducted in a laboratory setting.

Controlled observations allow researchers to investigate the effects of one variable on another more directly (the IV on the DV) and also allows researchers to randomly assign participants to different groups for comparison.

Evaluating Naturalistic and Controlled Observations

  • A strength of naturalistic observations is that the behaviour observed is more realistic as it occurs in a natural habitat, and therefore the findings are seen to be more valid and generalisable. Further strengths of naturalistic observation include the ability to study something that may be unethical or difficult to set up, as the independent variable is naturally occurring. For example, observing animals in their natural habitat would be difficult to set up in an artificial setting, as would predatory behaviour that might be deemed unethical to recreate.
  • Weaknesses of naturalistic observation include the inability to manipulate variables, which makes it difficult to establish causal relationships with certainty. As researchers are unable to isolate an independent variable on its own with naturalistic observations, it is possible that the dependent variable observed may be a consequence of other confounding variables that haven't been controlled for.
  • Another weakness is that the information gathered may be subjective and based on the researcher's own interpretations and observer biases. As the study lacks the control to isolate the independent variable, the researcher observes the behaviour and makes recordings, sometimes against defined criteria. An issue with this is that it may still be subject to interpretations, mistakes and biases.
  • A strength of controlled observations is they give researchers the ability to isolate an independent variable more directly through laboratory settings. This greater control allows researchers to measure how the IV affects the DV with greater certainty and limit extraneous variables from influencing the results. 
  • A weakness of controlled observations is that they are less realistic and lack ecological validity due to the artificial setting, which is normally a laboratory. This means the behaviour of participants may not be indicative of real-world behaviour, as the nature of the study is artificially set up.
  • Another issue with controlled observations is the risk of demand characteristics. Participants are aware they are being observed and may engage in behaviour that either looks to please researchers and their expectations or is different to what would normally happen in the real world. Because of this, the results collected may not generalise to real-world behaviour and may be invalid.

Covert and Overt Observations

Covert observation refers to studies where participants are unaware that they are being observed by researchers. This might involve naturalistic observations conducted in everyday environments with participants or animals.

Overt observation refers to studies where participants are aware they are being observed. This usually involves a controlled environment such as a laboratory setting.

Participant and Non-Participant Observation

Participant observation involves the observers/researchers becoming actively involved in the situation being studied to gain a more 'hands-on' perspective. An example of participant observation would be Milgram's Obedience study.

Non-participant observation means observers/researchers will not become actively involved in the behaviour being studied. An example of this would be Ainsworth's Strange Situation study.

Self-Report Techniques

Psychologists attempt to understand behaviour, and self-report techniques require participants to report on themselves. This is typically done by having participants answer questions or respond in some way to statements. Two principal methods of self-reporting are questionnaires and interviews, which may be structured or unstructured.

Questionnaires

A questionnaire is a list of predetermined questions to which participants are required to respond. 


Research Methods: Techniques

Types of experiments.

Laboratory: Conducted in a highly controlled environment. The IV is manipulated to see the impact on the DV, whilst the effects of other variables are minimised as far as possible. For example, giving participants lists of words to remember, giving them another task to prevent rehearsal, then testing their recall of the information.

  • Advantages : extraneous variables are closely controlled, meaning the IV is likely to have affected the DV, increasing the internal validity of the study. Research can be easily repeated as there will be a controlled, standardised procedure, increasing the reliability of the results
  • Disadvantages : artificial nature of the set-up means that the results may not reflect ‘real-life’ behaviour, so reducing the external validity of the study. Participants know they are being tested so may change their behaviour (demand characteristics). Tasks given in the research may not be reflective of everyday tasks (lack of mundane realism).

Field: The experimenter manipulates an IV in a more natural setting. For example, Piliavin (1969) got a confederate to collapse on a train when smelling of alcohol or carrying a walking stick, and seeing how many people helped in each condition.

  • Advantages : higher mundane realism than lab experiments, therefore higher external validity. Often participants won’t know they are being studied, so demand characteristics are less of an issue.
  • Disadvantages : harder to control extraneous variables, so harder to know if the IV has affected the DV. If participants are unaware they are being studied this raises ethical issues (lack of informed consent).

Natural: The experimenter studies the effects of a naturally occurring IV. Participants may still be studied in a lab-type setting to see the effects, but the IV is not manipulated by the researcher. For example, Williams (1986) looked at the effects on gender attitudes after the introduction of TV to a small town in Canada.

  • Advantages : high external validity, as the IV is naturally occurring. The effects can be tested of factors that could not be manipulated by the researcher (e.g., the effects of lack of attachment in Romanian orphans).
  • Disadvantages : even less control over extraneous variable than field experiments. Participants can’t be randomly allocated to conditions, introducing the possibility of bias. Naturally occurring IVs may be rare, so studies can’t be repeated.

Quasi: The IV is based on an existing difference between people. For example, gender differences in attitudes towards food.

  • Advantages : can be tested under controlled conditions (as in the example above), increasing the scientific credibility of the research.
  • Disadvantages : participants can’t be randomly allocated to conditions, introducing possible confounding variables.

Observational Techniques

Observations involve watching and recording people’s behaviour in a natural setting. Observations by themselves are non-experimental, but observational techniques can be used as part of an experiment. Types include:

Naturalistic and controlled: Naturalistic observations take place within a natural, non-manipulated environment, for example in a workplace or school. Controlled observations are more manipulated, for example the Strange Situation, so that variables are more controlled and effects of particular situations can be seen.

  • Evaluation: Naturalistic – high in external validity (as they are very true to life), but lower levels of control. Controlled – lower external validity, but more control (allowing for easier replication).

Covert and overt: Covert observations take place without the participants being aware that they are being watched. Overt observations are when the participant does know they are being watched, and have given prior consent to do so.

  • Evaluation: Covert – no participant reactivity, so more truthful behaviour is shown, but there are ethical issues (lack of consent). Overt – participants may change their behaviour, but the observation is more ethically sound.

Participant and non-participant: Participant observations are when the researcher themselves takes part, for example by joining the workforce in a workplace. Non-participant observations are when the researcher does not actually participate, but just observes.

  • Evaluation: Participant – the researcher gets a greater insight into the experiences of those being observed, but they may lose objectivity as they become part of the study, friendly with other participants, and so on. Non-participant – the researcher is more likely to remain objective, but may lack the extra insight gained from being a participant themselves.

Self-Report Techniques

Self-report techniques involve asking people about their behaviour.

Questionnaires

These are sets of questions which participants complete independently, for example on their attitudes towards something or beliefs about something. Questionnaires can be used as part of an experiment (e.g. measuring locus of control through a questionnaire, and then testing the participants in some way). Questions can be open, in which the participant can answer in any way they wish – for example, ‘why do you think people follow orders?’. This produces qualitative data, which is rich in detail. Alternatively, questions can be closed, in which there is a set of answers participants must choose from – for example, ‘do you think that people follow orders because of (a) the situation they are in, or (b) their personality?’. This produces quantitative (numerical) data which can be easily counted.

Evaluation:

  • Questionnaires can be sent to (potentially) thousands of people, without the researcher needing to be present whilst they are completed. Potentially there is access to a very large sample.
  • The responses will usually be easy to analyse, especially if the questions are closed.
  • The responses may be biased:
  • Social desirability bias- not being truthful to try to present yourself in a better light (underestimating the amount of alcohol you drink)
  • Response bias- answering all questions in a similar way and not reading the questions properly (ticking ‘yes’ for everything)
  • Acquiescence bias- a tendency to agree with things, meaning that the questionnaire is measuring a tendency to agree rather than what it is intending to measure.

Interviews

These are face-to-face interactions between the researcher and participant. They can be structured, where the interviewer asks a set of pre-determined questions and doesn’t deviate from them; unstructured, where the interviewer creates questions in response to the participant’s answers during the interview; or semi-structured, where there are some pre-set questions but also the opportunity to ask extra questions as well.

  • Structured interviews can easily be repeated and the data are easier to analyse, but they are inflexible and can’t include additional information.
  • Unstructured interviews are difficult to repeat and hard to analyse for trends and patterns, but allow more flexibility to investigate answers in more depth.
  • As in questionnaires, participants may not be honest with their answers, reducing the validity of the responses. The interviewer may be able to get more of a sense of how truthful the participant is being than in a questionnaire, however.

Correlations

A correlation measures the relationship between two variables, and the strength of that relationship. They are plotted on a graph known as a scattergram/scattergraph. Correlations can be positive , meaning that as one variable increases, so does the other one, for example as the number of people present increases, so does the rate of obedience. Negative means that one variable increases whilst the other decreases, for example as the level of anxiety increases, the accuracy of eyewitness testimony decreases. No correlation means there is no link between the two variables.

[Figure 1: scattergrams showing positive, negative and no correlation]

Correlations differ from experiments. In an experiment, the researcher manipulates an IV, attempts to control all other variables, and records the effect on the DV. In a correlation, there is no manipulation of either variable – they are known as co-variables. Therefore, it cannot be concluded that one co-variable has caused the other to change, just that there is a correlation between them. Another variable might actually have caused the change. In an experiment, causal links can be established, as the impact of other variables is minimised.

  • Correlations can be used as a starting point for further research. If no relationship is found, then there is no need to undertake experiments into the subject.
  • They are quite easy to conduct- the researcher just needs to find two sets of data to compare.
  • Demonstrating a cause-effect link is not possible, as other variables might be involved.
  • The findings of correlational studies are often reported as facts, leading to misinterpretations, which could have consequences (for example, black people are more likely to be convicted of knife crimes than non-black people, but this does not mean that ethnicity causes knife crime- black people are more likely to come from poorer socio-economic backgrounds so are more likely to turn to knife crime due to this).
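For illustration only (the calculation itself is not on the specification), the strength and direction of the relationship between two co-variables can be summarised with Pearson's correlation coefficient r, which runs from -1 to +1. The anxiety and eyewitness-accuracy figures below are hypothetical:

```python
# Pearson's r for two hypothetical co-variables: anxiety score and eyewitness accuracy.
from math import sqrt

anxiety  = [12, 18, 25, 31, 40, 47, 55, 62]   # co-variable 1 (hypothetical)
accuracy = [19, 17, 16, 14, 12, 10,  9,  7]   # co-variable 2 (hypothetical)

n = len(anxiety)
mean_x = sum(anxiety) / n
mean_y = sum(accuracy) / n
cov  = sum((x - mean_x) * (y - mean_y) for x, y in zip(anxiety, accuracy))
sd_x = sqrt(sum((x - mean_x) ** 2 for x in anxiety))
sd_y = sqrt(sum((y - mean_y) ** 2 for y in accuracy))

r = cov / (sd_x * sd_y)
print(round(r, 2))   # very close to -1: a strong negative correlation
```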

Content Analysis

This is a type of observational research in which something that has been produced (such as newspaper articles and television adverts) is studied. The aim is to analyse the communication in order to detect trends and make conclusions.

  • Content analysis is high in external validity, as what is being analysed is the material that people consume in ‘real life’. It allows for the investigation of potentially sensitive topics, without the need for consent, as the material is in the public domain.
  • Content analyses may not take into account the motivations of the people that created the content in the first place, potentially weakening the validity of conclusions that can be drawn.

Case Studies

A case study is an in-depth investigation, usually of one person or a small group of people. Individuals studied may have particular conditions or unusual characteristics. Case studies often produce qualitative data, as unstructured interviews and observations may be used. The participant may be asked to complete laboratory experiments, which would be more likely to produce quantitative data. Often case studies are longitudinal, as the participant is studied over many years. Examples of case studies include HM, whose memory was damaged following an operation.

  • Case studies produce rich, detailed, in-depth data, giving a close insight into particular behaviours. They can also help understanding of the behaviours of ‘normal’ individuals, for instance HM’s case showed that there are separate stores for long-term and short-term memory.
  • It can be difficult to generalise the findings from case studies, as the sample sizes are so small. Data is often collected retrospectively, so relies on the recall of the participant or their friends or family members, which may be inaccurate. Comparing to a control group is often not possible, and the researcher may become personally involved with the participant over a number of years, making them less objective.


Research Methods

Aims & Hypotheses

The aim states the purpose of the investigation and is driven by a theory. The aim is a broad starting point that gets narrowed down into the hypothesis.

A hypothesis (plural hypotheses) is a precise, testable statement of what the researchers predict will be the outcome of the study. Hypotheses must always include the DV and both levels of IV.

An experimental hypothesis predicts what change(s) will take place in the dependent variable when the independent variable is manipulated.

The null hypothesis states that there is no relationship between the two variables being studied, e.g. there will be no difference in the number of words recalled between the music and no music conditions.

A directional hypothesis (one-tailed) is one that states the direction of the difference or relationship (negative or positive correlation). This is used when previous research findings are available to help make the prediction, e.g. participants in the no music condition will recall fewer words compared to the music condition.

A non-directional hypothesis (two-tailed) states that there will be a difference between the two groups/conditions but it does not state where the difference will occur. This is used when there are no previous research findings available to help make the prediction, e.g. there will be a difference in the number of words recalled between the music and no music conditions.

Variables and controls

The IV is the variable that is manipulated and changed between the conditions.

The DV is the data that is collected.

Operationalisation is a clear identification/definition of the observable actions/behaviours to be recorded which enables the behaviour under review to be measured objectively.

A control condition does not involve exposure to the treatment or manipulation of the IV.

An experimental condition does involve exposure to the treatment or manipulation of the IV.

Extraneous variables

Situational extraneous variables are aspects of the environment that might affect the participant's behaviour, e.g. noise, temperature, lighting, time of day.

Participant extraneous variables are any characteristics of a participant that could affect the DV when they are not intended to, e.g. IQ, mood, age, gender.

Demand characteristics are when participants detect cues from the environment which makes them guess the aim of the study and they change their behaviour as a result.

Investigator effects are where a researcher (consciously or unconsciously) acts in a way to support their prediction and therefore affects the DV.

Order effects are when participants' responses are affected by the order of the conditions to which they were exposed, i.e. fatigue effects or practice effects.

Participant reactivity is when participants alter their behaviour because they know they are part of a study and this can threaten the internal validity of the results.

Extraneous variables  are variables other than the IV which could affect the DV and should be controlled.

Confounding variables are variables which have affected the DV and have threatened the validity of the results.

Controlling variables

Random allocation is when each participant has an equal chance of being allocated to either condition. All experiments should use random allocation apart from quasi-experiments which cannot.

Counterbalancing is a way to control for order effects. Half of the participants do condition A then B and the other half do B then A. It means that any order effects will be spread out equally across both conditions.

Randomisation is used in the presentation of trials in an experiment to enable stimuli to be presented in a random manner, so that the order of presentation does not have any effect on the DV, e.g. words presented in a list.

Standardisation is the process by which the procedures used in research are kept exactly the same, apart from the IV.

Single blind study is when the participants are deliberately kept ignorant of either the group to which they have been assigned or key information in order to reduce demand characteristics.

Double blind study is when the participants and the researcher are deliberately kept ignorant of either the group to which the participants belong or key information in order to reduce demand characteristics and investigator effects.

Types of experiments

Lab experiments

A laboratory experiment is an experiment conducted under highly controlled conditions and the IV is deliberately manipulated by the researcher. Participants will typically know they are in the experiment.

Strengths

High control – high internal validity

Standardised procedure – reliable methodology

Limitations

Low ecological validity

Low mundane realism

Chances of demand characteristics

Field experiments

A field experiment is an experiment conducted in a natural environment but the IV is deliberately manipulated by the researcher. Participants typically will not know they are in the experiment.

High ecological validity

High mundane realism

Less chance of demand characteristics

Extraneous variables – questionable internal validity

Hard to replicate to check reliability of the findings

Natural experiments

A natural experiment is an experiment conducted in a natural environment and the IV is naturally occurring e.g. introduction of the internet to a remote community or a natural disaster

Allows research to be carried out that would not normally be ethically viable

Confounding variables do not allow for causality to be established

Cannot be replicated to check reliability of the findings

Quasi experiments

A quasi experiment is an experiment conducted in a lab or natural environment. The IV is a naturally occurring characteristic of the participant e.g. autism, OCD etc. Random allocation is not possible and therefore it is not a ‘true experiment’.

See the previous types of experiments for strengths and limitations depending on where the study is carried out.

Experimental design

Independent groups design is when participants only take part in one condition.  

Less chance of demand characteristics affecting the DV

Order effects are eliminated

Participant extraneous variables are a threat to the internal validity of the results

More costly and time consuming as more participants are needed

Repeated measures design is when participants take part in both conditions.

Participant extraneous variables are reduced

Less costly and time consuming as fewer participants are needed

Increased chance of demand characteristics affecting the DV

Order effects are a threat to internal validity

Matched pairs design is when participants only take part in one condition but they are matched on a key characteristic with another participant. Random allocation is used to assign one member of each pair to one condition, and the other member goes into the other condition.

Participant extraneous variables are controlled for

Less costly and time consuming as fewer participants are needed compared to independent groups design

Demand characteristics are reduced

Participants can never truly be matched – there will always be some differences

It can be time consuming to match participants

Sampling methods

The target population is the group of people the researcher wants to study. They cannot study everyone so they have to select a sample.

A sample is a small group of people who represent the target population and who are studied.

It is important the sample is representative of the target population.

Random sampling

This is a sampling technique in which every member of the target population has an equal chance of being chosen.

How to do it 

1 A sampling frame, which is a complete list of all members of the target population, is obtained

2 All of the names on the list are assigned a number

3 The sample is selected randomly, for example using a computer-based randomiser or picking names from a hat
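A minimal sketch of these three steps, using a hypothetical sampling frame of 1,000 students:

```python
# Random sampling: every member of the sampling frame has an equal chance of selection.
import random

sampling_frame = [f"Student {i}" for i in range(1, 1001)]   # step 1: complete list of the target population
# step 2 is implicit here: each name already has a position (number) in the list
sample = random.sample(sampling_frame, 50)                  # step 3: 50 names drawn at random, no repeats
print(sample[:5])
```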

Advantages

There is no bias with this method: every person in the target population has an equal chance of being selected. Therefore the sample is more likely to be (but not definitely) representative.

Disadvantages

Impractical: It takes more time and effort than other sampling methods because you need to obtain a list of all the members of the target population, identify the sample and then contact the people identified to ask if they will take part – and not all of those identified may wish to take part.

Not completely representative: unbiased selection does not guarantee an unbiased sample, for example, the random selection may only end up generating an all female sample, making the sample unrepresentative and therefore not generalisable.

Opportunity sampling

A technique that involves recruiting anyone who happens to be available at the time of your study.

How to do it

The researcher will go somewhere where they are likely to find their target population and ask people to take part.

Simple, quick, easy and cheap, as you are just using the first participants that you find.

Useful for naturalistic experiments as the researcher has no control over who is being studied.

Unrepresentative: the sample is likely to be biased by excluding certain types of participants, which means that we cannot confidently generalise the findings, e.g. an opportunity sample collected in town during the day on a weekday would not include those who are at work or college.

Volunteer sampling

This is when people actively volunteer to be in a study by responding to a request which has been advertised by the researcher (they are self-selecting). The researcher may then select only those who are suitable for the study.

Participants self-select by responding to an advert.

A convenient and economical method of gathering a wide range of people with particular requirements for a study, compared to a random sample, as volunteers have already agreed to participate.

Can reach a wide audience, especially online.

Sampling bias – particular people (with higher levels of motivation and more free time) are more likely to volunteer, so the findings may be harder to generalise. What is it that has made them decide to take part? This may lead to a bias, as they may all display similar characteristics.

Systematic sampling

Systematic sampling involves selecting names from the sampling frame at regular intervals. For example, selecting every fifth name in a sampling frame.  

A sampling frame is produced, which is a list of people in the target population organised into, for example, alphabetical order.

A sampling system is nominated (e.g. every 3rd, 6th or 8th person).

The researcher then works through the sampling frame until the sample is complete.
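A minimal sketch of systematic selection from a hypothetical sampling frame of 100 names:

```python
# Systematic sampling: take every nth person from the sampling frame.
sampling_frame = [f"Student {i}" for i in range(1, 101)]   # hypothetical sampling frame
n = 5                                                      # nominated system: every 5th person
sample = sampling_frame[n - 1::n]                          # the 5th, 10th, 15th, ... names
print(len(sample))                                         # 20 people selected
```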

Simple to conduct (though a sampling frame is still needed).

The population will be evenly sampled using an objective system, therefore reduces researcher bias increasing the chances of a representative sample.

Not truly unbiased/random because not all people have an equal chance of being selected therefore representativeness is not guaranteed.

Stratified sampling

For this method, participants are selected from different subgroups (strata) in the target population in proportion to the subgroup’s frequency in the population.

If a researcher wants to sample different age groups in a school, they first of all have to identify how many students are in each stratum, e.g. students aged 10-12, 13-15 and 16-18.

They then need to work out the percentage of the target population that each stratum makes up. If there are 1000 students in the school and 300 of them are 10-12 years, 500 of them are 13-15 years and 200 are 16-18 years, the frequencies of the subgroups are 30%, 50% and 20%.

The researcher then uses random sampling to select the sample. If the researcher wishes to have a sample of 50 participants, then 30% of the 50 should be 10-12 years (15 students), 50% should be 13-15 years (25 students) and 20% should be 16-18 years (10 students), selected at random from each stratum.
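A minimal sketch of this worked example, with hypothetical pupil lists standing in for the three strata:

```python
# Stratified sampling: each stratum contributes to the sample in proportion to its size.
import random

strata = {
    "10-12": [f"10-12 pupil {i}" for i in range(300)],   # hypothetical sampling frames
    "13-15": [f"13-15 pupil {i}" for i in range(500)],
    "16-18": [f"16-18 pupil {i}" for i in range(200)],
}
total = sum(len(frame) for frame in strata.values())      # 1000 students in the school
sample_size = 50

sample = []
for name, frame in strata.items():
    k = round(sample_size * len(frame) / total)           # 15, 25 and 10 respectively
    sample.extend(random.sample(frame, k))                # random selection within each stratum

print(len(sample))                                        # 50 participants in total
```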

Advantage

More representative than other sampling methods because each subgroup is proportionally represented, making generalisation of findings possible.

Disadvantage

Knowledge of population characteristics required: stratified samples require a detailed knowledge of the population characteristics, which may not be available.

Time consuming: dividing the population into strata and then randomly selecting from each stratum takes time and adds administrative effort.

Ethical guidelines

Remember the saying ‘Can Do Can’t Do With Participants’ to help you remember the ethical guidelines.

Consent: This means revealing the true aims of the study, or at least letting the participant know what is actually going to happen.

Participants must be aware of what they will be asked to do as part of the study in order to give valid consent.

If the study involves children parental consent must be obtained.

Deception: This means deliberately misleading or withholding information. Deceiving participants must be kept to a minimum. Withholding details of the research to avoid influencing behaviour is acceptable; deliberately providing false information is not. If telling the truth would not affect the results, participants must be informed.

Confidentiality: The communication of personal information from one person to another and the trust this will be protected.  Psychologists need to be sure the information they publish will not allow their participants to be identified (keeping their identity secret may not be enough).

Debrief: If consent cannot be obtained (such as in a field experiment), participants must be fully debriefed afterwards. This involves telling the participant about the experiment and then giving them the option of withdrawing their information if they wish.

Withdraw: Even after giving consent, participants still have the right to leave the experiment at any point in time. Participants must be made aware of this when they sign the consent form.

Protection from harm: Participants should be no worse off when they leave an experiment than when they arrived. Risk is considered acceptable if it is no greater than what would be experienced in everyday life.

Validity and Reliability

Validity refers to the extent to which something is measuring what it is claiming to measure.

Internal validity refers to the extent to which a study establishes a cause-and-effect relationship between the IV and the DV.

External validity

Ecological validity refers to whether the data is generalisable to the real world.

Population validity refers to whether the data is generalisable to other groups of people.

Temporal validity refers to whether the data is generalisable to other time periods.

Test validities

Construct validity: This refers to the degree to which a test measures what it claims, or purports, to be measuring e.g. How effectively does a mood self-assessment for depression really measure the construct of ‘depression’? How effectively does an IQ test really measure ‘intelligence’?

Concurrent validity: This asks whether a measure is in agreement with a pre-existing measure that is validated to test for the same (or a very similar) concept. This is gauged by correlating the measures against each other. For example, does a new test measuring intelligence agree with an existing measure of intelligence, e.g. the Stanford-Binet test?

Predictive validity: This is the degree to which a test accurately predicts a criterion that will occur in the future. For example, a diagnostic test of schizophrenia has low predictive validity because being diagnosed with schizophrenia can lead to very different outcomes: some people continue to live ‘normal’ lives whilst others struggle with homelessness and drug abuse.

Face validity: A surface-level form of validity involving a subjective assessment of whether or not a study or test appears to measure what it is supposed to measure.

Reliability

Reliability refers to the extent to which something is consistent.

Test-retest reliability: This measures the consistency of a test over time, i.e. if a person completed the same test twice at different times, are the scores the same?

If the results on the two tests achieve a correlation co-efficient of 0.8 or above, we can assume the test is reliable.

Inter-rater/observer reliability is the degree of agreement among raters.

If there is high correlation (0.8+) between the observers/ raters, the measure is reliable.
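As a minimal sketch of checking the 0.8 criterion (assuming Python with numpy available; the two score lists are made up for illustration):

```python
# Minimal sketch: correlate two administrations of the same test (test-retest),
# or two observers' tallies (inter-rater), and check against the 0.8 criterion.
import numpy as np

scores_time_1 = [12, 15, 9, 20, 18, 14, 11, 17]   # made-up scores, first sitting
scores_time_2 = [13, 14, 10, 19, 17, 15, 10, 18]  # same participants, second sitting

r = np.corrcoef(scores_time_1, scores_time_2)[0, 1]   # correlation co-efficient
print(f"r = {r:.2f}, reliable: {r >= 0.8}")            # 0.8+ is treated as reliable
```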

Observations

A researcher will simply observe the behaviour of a sample and look for patterns. Like all non-experimental methods, observations cannot establish cause-and-effect relationships.

Observations are used in psychological research in one of two ways: as a research method in their own right or as a technique within another method (e.g. within an experiment).

Observations can be understood by considering their four main features:

The settings: Naturalistic v controlled

The data: Structured v unstructured

The participants: Overt v covert

The observers: Participant v non-participant

Naturalistic observations

This refers to the observation of behaviour in its natural setting. The researcher makes no attempt to influence the behaviour of those being observed. It is often done where it would be unethical to carry out a lab experiment.

High levels of ecological validity as carried out in a natural setting

P’s are less likely to be affected by demand characteristics as they are unaware they are being studied

Little control over EVs - hard to establish causality

Replication is often not possible - cannot check reliability of the findings

Controlled observations

This refers to an observation taking place in a controlled setting, usually behind a one-way mirror so that the observer cannot be seen.

There is less risk of extraneous variables affecting the behaviour as it is in a controlled environment

The setting is artificial and therefore the results may lack ecological validity

Structured observations

An observation where the researcher creates a behavioural checklist before the observation in order to code the behaviour. Behaviour can be sampled using time or event sampling .

Researchers will use a standardised behaviour checklist to record the frequency of those behaviours (collecting quantitative data).  The target behaviour is split up into a set of behavioural categories (behaviour checklist) e.g. aggression may be categorised as punching, kicking, shouting etc.

The behaviours should:

Be observable (record explicit actions)

Have no need for inferences to be made

Cover all possible component behaviours

Be mutually exclusive/ not overlap (not having to mark two categories at the same time)

A pilot study is a small scale study carried out before the actual research. It allows the researchers to practise using the behaviour checklist/ observation schedule. 

It is not always possible to watch and record every behaviour so researchers use a systematic method of sampling it. 

Event sampling

Counting each time a particular behaviour is observed.

Strength: Useful when the target behaviour or event happens infrequently and could be missed if time sampling was used. 

Limitation: However, if the situation is very busy and a lot of the target behaviour is occurring, the researcher may not be able to record it all.

Time sampling

Recording behaviour at timed intervals

 Strength: The observer has time to record what they have seen.

Limitation: Some behaviours will be missed outside the intervals - observations may not be representative.

Strengths of structured observations

The behavioural checklist (coding system) allows objective quantifiable data to be collected which can be statistically analysed

Allows for more than one observer (due to checklist) which can increase the reliability (inter-observer reliability)

Limitations of structured observations

The pre-existing behavioural categories can be restrictive and do not always explain why the behaviour is happening

Unstructured observations

The observer notes down all the behaviours they can see, in qualitative form, over a period of time. No behavioural checklist is used.

They can generate in-depth, rich qualitative data that can help explain why the behaviour has occurred

Researchers are not limited by prior theoretical expectations

The observer can be drawn to eye-catching behaviours that may not be representative of all the behaviours occurring

More subjective and less comparable across researchers

Overt observations

Participants are aware that their behaviour is being studied, the observer is obvious.

It will better fulfil ethical guidelines (compared to covert)

Participants know they are being observed and therefore they may change their behaviour (participant reactivity)

Covert observations

Participants are unaware that their behaviour is being studied – the observer is hidden.

P’s do not know they are being observed and therefore their behaviour is more likely to be natural (higher validity)

It can break several ethical guidelines: deception is used, and it may cause participants some psychological harm

Participant observations

The observer becomes involved in the participant group and may not be known to other ppts.

Being part of the group can allow the researcher to get a deep understanding of the behaviours of the group (increasing validity).

The presence of the researcher might influence the behaviour of the group.

The researcher may lose objectivity as they are part of the group.

Non-participant observations

The observer is separate from the participant group that are being observed.

Researchers’ observations are likely to be more objective as they are not influenced by the group

It is harder to produce qualitative data to understand the reasons for the behaviour

Self-report methods

Self-report techniques describe methods of gathering data where participants provide information about themselves e.g. their thoughts, feelings, opinions.

This can be done in written or oral form. The techniques generally used are:

Questionnaires

There are thousands of standardised measures used in psychology for clinical purposes and for research. Examples include:

Adverse Childhood Experiences (ACEs) questionnaire

Beck Depression Inventory

Frost Multidimensional Perfectionism Scale (FMPS)

The Holmes and Rahe Stress Scale

Adolescent Attachment Questionnaire

McGill Pain Questionnaire

Questionnaires are a written self-report technique where participants are given a pre-set number of questions to respond to. They can be administered in person, by post, online, over the telephone, or to a group of participants simultaneously.

A closed question is one with only a fixed number of answer choices available. Closed questions give quantitative data and are easier to analyse.

There are many types of closed question, such as Likert-scale and ranking questions.

Open questions produce qualitative data as they allow participants to give a full, detailed answer and there is no restriction on what the participants can say. Open questions could lead to ideas for further investigation. Respondents can find open questions less frustrating than forced choice.

Standardised instructions: These are a set of written or recorded instructions that are given to the participant to ensure that all ppts receive them in the same way. This increases the reliability and validity of the research. It is used as a control to standardise the procedure.

Social desirability bias is reduced as no interviewer is present and questionnaires are often anonymous

A large amount of data can be collated very quickly which can increase the representativeness and generalisability

Data can be analysed more easily than interview data (if mostly quantitative)

The options given may not reflect participants’ opinions, and they may be forced into choosing an answer which does not fit – lowering the validity of the findings

The mostly quantitative data is less rich than the data produced by interviews

Interviews are self-report techniques that involve an experimenter asking participants questions (generally on a one-to-one basis) and recording their responses.

A structured interview has predetermined questions. It is essentially a questionnaire that is delivered face-to-face (or over the phone).

Standardised questions mean it can be replicated

Reduces differences between interviewers (consistency = higher reliability)

Quick to conduct

Interviewers cannot deviate from the topic or elaborate on points

Mainly produces quantitative data which lacks insight

An unstructured interview has less structure. It may start with some predetermined questions, and new questions may then develop during the interview depending on the answers given.

More flexibility allows for the collection of rich data which offers a deeper insight and for the interviewer to follow up, explore more or seek clarification

Difficult to analyse as they produce lots of qualitative data. The researcher should demonstrate reflexivity.

Interviewees may not be truthful due to social desirability bias which lowers the validity of the findings

Semi-structured interview is a mix of structured and unstructured and is often the most successful approach.

Most interviews involve an interview schedule, which is the list of questions that the interviewer intends to cover. This should be standardised for each participant to reduce the contaminating effect of interviewer bias.

The interviewer may take notes throughout the interview, although this can interfere with their listening skills. The interview may be audio or video recorded and analysed later. Any audio recordings must be turned into written data, which is called an interview transcript. This must protect anonymity.

Case studies

A case study involves the detailed study of a single individual or a small group of people.  Conducting a case study usually (but not always) involves the production of qualitative data .

Researchers will construct a case history of the individual concerned using interviews, observations, questionnaires or psychological testing to assess what they are (or are not) capable of. Testing may produce quantitative data .  Triangulation means using more than one method to check the validity of the findings.

Phineas Gage - the effect of damage to the prefrontal cortex on personality

Genie - investigating the effect of abuse/ neglect on development

David Reimer - investigating whether gender is biologically determined or socialised

HM - investigating the impact of damage to the hippocampus on memory

Clive Wearing - investigating the impact of damage to the hippocampus on memory

Case studies are generally longitudinal studies, which means they follow the individual or group over an extended period of time. The strength is that they allow researchers to look at changes over time. However, participants may drop out, which can lead to a small sample size. The number of people dropping out is the attrition rate.

They offer high levels of validity as they go into depth and give a rich insight.

They allow multiple methods to be used (triangulation) = increasing validity.

They allow researchers to study events or complex psychological areas they could not practically or ethically manipulate.

Efficient as it only takes one case study to refute a theory.

Researcher bias: researchers can become too involved and lose their objectivity - misinterpreting or influencing outcomes.

Lack of control: there are many confounding variables that can affect the outcome.

As they are unique they can be difficult to replicate and therefore lack scientific rigour.

Correlational research

This design looks for a relationship or association between two variables (co-variables).

If the two variables increase together then this is a positive correlation

If one variable increases and the other decreases then this is a negative correlation

If there is no relationship between variables this is called a zero correlation

A correlation can be illustrated using a scattergram. Scores for the two variables are obtained and used to plot one dot for each individual. The scatter of dots indicates the degree of correlation between the co-variables, which can be expressed numerically as a correlation co-efficient (e.g. Pearson’s r). A strong correlation is generally taken to be 0.8+.
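As a minimal sketch of producing a scattergram and a correlation co-efficient (assuming Python with numpy and matplotlib available; the co-variables are made up for illustration):

```python
# Minimal sketch: plot a scattergram for two made-up co-variables and report r.
import numpy as np
import matplotlib.pyplot as plt

hours_revised = [1, 2, 3, 4, 5, 6, 7, 8]          # co-variable 1
test_score    = [35, 41, 50, 48, 60, 66, 70, 78]  # co-variable 2

r = np.corrcoef(hours_revised, test_score)[0, 1]  # correlation co-efficient

plt.scatter(hours_revised, test_score)            # one dot per participant
plt.xlabel("Hours revised")
plt.ylabel("Test score")
plt.title(f"Positive correlation, r = {r:.2f}")
plt.show()
```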

They can be used when it would be impractical or unethical to manipulate variables using another method

It can make use of existing data (secondary), and so can be a quick and easy way to carry out research

Often, no manipulation of behaviour is required. Therefore, it is often high in ecological validity because it is real behaviour or experiences.

Correlations are very useful as a preliminary research technique, allowing researchers to identify a link that can be further investigated through more controlled research.

Correlation does not equal causation. No cause and effect relationships can be inferred because the direction of the relationship is not known.

The relationship could be explained by a third intervening variable. Correlations are open to misinterpretation.

Types of data

Qualitative data

Qualitative data is information that is not in numerical form but in words, often describing thoughts, feelings and opinions. It is rich in detail and might include a reason as to why the behaviour occurred. Qualitative researchers are interested in trying to see through other people’s eyes. Qualitative research acknowledges that it is subjective, unlike the scientific approach, which is objective.

Examples include answers given in an interview, descriptions of an observation, explanations or opinions in a questionnaire.

Provides a rich insight and understanding of an issue

It can help explain the why of a phenomenon

It is less reductionist than quantitative data

More open to researcher bias in the interpretation of the results

Harder to analyse

Quantitative data

Quantitative data is numerical data that can be statistically analysed. It does not include a reason or explanation (the why) for the numerical answer given.

Examples include: Numerical data collected in experiments, observations, correlations and closed/rating scale questions from questionnaires all produce quantitative data.

Allows for easier analysis and comparison

Objective and scientific

Less chance of researcher bias

The why often cannot be answered

Can be viewed as reductionist as complex ideas are reduced to numbers

Many studies collect a mixture of both qualitative and quantitative data. For example, Milgram reported that 65% of participants were fully obedient (quantitative), but he also reported observational comments and interviewed the participants afterwards (qualitative), providing further insight. The two types of data can be complementary and are not mutually exclusive.

Primary data

Primary data is any data that has been collected by the psychologist for the purpose of their own research or investigation. It is of direct relevance to their research aim and hypothesis.

It can include:

Answers from a questionnaire

Notes from an observation

Results from an experiment

Gathered for the aim of the study

Replicable (check for  reliability of results)

Taken directly from the population (generalisability)

Researcher bias

Time and effort

Need a large sample to make it generalisable

Secondary data

Secondary data is any data that already exists and was collected for another purpose.

It can include:

Government statistics

Easier to access than primary

Large samples may exist e.g. Govt data

Data may not fit what the researcher wants to find out

May be of poor quality

Levels of data

Nominal level data is data that can be grouped into categories e.g. favourite lessons. There is no logical order to the categories. The most appropriate measure of average for this level of measurement is the mode. This is the lowest level of data.

Ordinal level data is data that is presented in ranked order (e.g. places in a beauty contest), but the intervals between the ranks are not equal or meaningful, e.g. the person who comes 1st is not twice as beautiful as the person who comes 5th (out of ten competitors). The data may be subjective, e.g. happiness scores. The most appropriate measure of average for this level of measurement is the median. This is the mid-level of data.

Interval level data is measured in fixed units with equal distance between points on the scale. For example, time measured in seconds. The most appropriate measure of average for this level of measurement is the mean. Psychological tests which are standardised (psychometric test) e.g. IQ scores, are classed as interval. This is the highest level of data.

Ratio level data is like interval but it has a true value of zero (it is not on the specification – so class it as interval).

Measures of central tendency

The mode is the value that occurs most frequently in a data set. It is used with nominal level data.

How it is calculated: Identify the value that is most common. If two values are equally common then the data set is known as bimodal.

   

It is the only average to use when the data is nominal

It is easy to calculate

It is a very basic method and is not always representative of the data

When the data is bimodal or there is no mode it has limited usefulness

The median is the middle score in a data set. It is used with ordinal level data.

How it is calculated: Arrange the scores from lowest to highest and identify the middle value. If there is an even number of scores, take the two middle values, add them together and divide by 2.

It is less affected by extreme scores

It does not take account of all ppt scores and is therefore not representative

The mean is a mathematical average of a set of scores. It is used with interval and ratio level data.

How it is calculated: Add all of the scores together and divide by the number of scores.

The most sensitive as it takes into account all of the data = more representative of the scores of all ppts

Easily distorted by anomalous data
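As a minimal sketch covering all three averages (assuming Python; the scores are made up for illustration):

```python
# Minimal sketch: mode, median and mean for a made-up data set.
import statistics

scores = [3, 5, 5, 6, 7, 8, 9]

print(statistics.mode(scores))    # 5     - most frequent value (nominal data)
print(statistics.median(scores))  # 6     - middle value of the ordered scores (ordinal data)
print(statistics.mean(scores))    # ~6.14 - sum of scores / number of scores (interval data)
```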

Measures of dispersion

The range is the difference between your highest and lowest values.   It is used with the median.

How it is calculated: Subtract the lowest value from the highest value and add 1. Adding 1 allows for the fact that raw scores are often rounded up or down when they are measured, accounting for this margin of error.

Quick and easy to calculate

It does not take central values of a data set into account, and so it can be skewed by extremely high or low values.

Standard deviation

Standard deviation is a single value that tells us how far scores deviate (move away) from the mean.   It is used with the mean.

The larger the SD, the greater the dispersion/spread within a set of data. A large SD suggests that not all participants were affected by the IV in the same way. A low standard deviation is better, as it means the data are tightly clustered around the mean, suggesting participants responded in a similar way and the results are more reliable.

This is a much more precise measure than the range as it includes all values.

Extreme values may not be revealed
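As a minimal sketch of the two measures of dispersion (assuming Python; same made-up scores as in the central tendency sketch above):

```python
# Minimal sketch: range (using the A level 'add 1' convention) and standard deviation.
import statistics

scores = [3, 5, 5, 6, 7, 8, 9]

data_range = max(scores) - min(scores) + 1   # 9 - 3 + 1 = 7
sd = statistics.stdev(scores)                # spread of scores around the mean

print(data_range, round(sd, 2))
```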

Graphs and charts

Bar charts are a simple and effective way of presenting and comparing data. They are most useful with nominal data because each bar represents a different category of data. It is important to leave a space between each bar on the graph in order to indicate that the bars represent ‘separate’ data rather than ‘continuous’ data. Different categories of data are known as discrete data. The bars can be drawn in any order.

Histograms are mainly used to present frequency distributions of interval data. The horizontal axis is a continuous scale. There are no spaces between bars, because the bars are not considered separate categories. They are used when the data is interval and is continuous.

A scatter gram (or scatter graph) is a graphical display that shows the correlation or relationship between two sets of data (or co-variables) by plotting dots to represent each pair of scores. A scatter gram indicates the strength and direction of the correlation between the co-variables.

Tables are a way of presenting quantitative data in a clear way that summarises the raw data. They normally show the measures of central tendency and dispersion for each condition, which can then be compared. The distribution of the data can also be assessed if the mode, median and mean are presented.
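As a minimal sketch of the bar chart vs histogram conventions described above (assuming Python with matplotlib available; the data are made up for illustration):

```python
# Minimal sketch: a bar chart for discrete categories next to a histogram
# for continuous interval data.
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)

# Bar chart: nominal categories, drawn with spaces between the bars
ax1.bar(["Dogs", "Cats", "Fish"], [12, 9, 4], width=0.6)
ax1.set_title("Bar chart (nominal data)")

# Histogram: continuous scale on the x-axis, so the bars touch
reaction_times = [0.31, 0.35, 0.38, 0.40, 0.42, 0.44, 0.45, 0.47, 0.51, 0.55]
ax2.hist(reaction_times, bins=5)
ax2.set_title("Histogram (interval data)")

plt.show()
```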

Distribution of data

Normal distribution

The normal distribution is a symmetrical, bell-shaped probability distribution. It is the distribution expected for many naturally occurring variables; for example, if you measured the height of everyone in your school, the scores would be approximately normally distributed.

The mean, median and mode are all at the exact midpoint (allow a tolerance of 0.5 in the exam).

Positive skew

A positive skew is where most of the distribution is concentrated towards the left of the graph, resulting in a long tail on the right. Imagine a very difficult test in which most people got low marks, with only a handful of students at the higher end. This would produce a positive skew.

The rule is that the median and mean are higher than the mode. This means that most people got a score lower than the mean.

Negative skew

The opposite occurs in a negative skew. A very easy test would produce a distribution where the bulk of scores are concentrated on the right, resulting in a long tail of anomalous scores on the left. The mean is pulled to the left this time (due to the low scorers, who are in the minority), with the mode at the highest peak and the median in the middle.

The rule is that the median and mean are lower than the mode.

Inferential statistics

Inferential statistics allow you to test a hypothesis or assess whether your data is generalisable to the broader population. They determine if the data is statistically significant.

There are three factors which affect which test you use:

1. Whether you’re testing for a difference or a correlation (association)

2. The design of the investigation: independent groups (unrelated) OR repeated measures (related)

3. What type of data you have (nominal/ interval/ordinal)

To help you remember the table, learn the phrase: Carrots Should Come Mashed With Swede Under Roast Potatoes

Tests of difference, unrelated designs (independent groups):
Nominal – Chi squared
Ordinal – Mann Whitney
Interval – Unrelated t-test

Tests of difference, related designs (matched pairs and repeated measures):
Nominal – Sign test
Ordinal – Wilcoxon
Interval – Related t-test

Tests of association (correlation):
Nominal – Chi squared
Ordinal – Spearman’s rho
Interval – Pearson’s r
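As a minimal sketch, the table can also be written as a simple lookup (assuming Python; the keys and test names simply restate the table above):

```python
# Minimal sketch: choose a statistical test from (level of data, design).
choose_test = {
    ("nominal",  "unrelated"):   "Chi squared",
    ("nominal",  "related"):     "Sign test",
    ("nominal",  "correlation"): "Chi squared",
    ("ordinal",  "unrelated"):   "Mann Whitney",
    ("ordinal",  "related"):     "Wilcoxon",
    ("ordinal",  "correlation"): "Spearman's rho",
    ("interval", "unrelated"):   "Unrelated t-test",
    ("interval", "related"):     "Related t-test",
    ("interval", "correlation"): "Pearson's r",
}

print(choose_test[("ordinal", "related")])  # Wilcoxon
```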

The Sign test

The sign test is a non-parametric statistical test of difference that allows a researcher to determine the significance of their investigation. It is used in studies that have used a repeated measures design, where the data collected is nominal .  You need to know how to calculate this test.

1. Work out whether the data has changed by putting a + or – in the final column (the size of the difference is not needed)

2. Work out how many + and – signs there are (any scores that are the same are removed and not counted)

3. The smaller number of + or – signs is your S value, which you then compare against a critical values table to check whether the result is significant.

4. The N value = the number of ppts – any 0 values
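As a minimal sketch of steps 1–4 (assuming Python; the before/after scores are made up for illustration):

```python
# Minimal sketch of the sign test: compare each participant's two scores,
# record a + or -, drop any ties, then find S and N.
before = [5, 7, 6, 4, 8, 6, 7, 5, 6, 7]
after  = [6, 8, 6, 5, 7, 8, 9, 6, 8, 8]

signs = ["+" if a > b else "-" for b, a in zip(before, after) if a != b]  # ties removed

s_value = min(signs.count("+"), signs.count("-"))  # the less frequent sign
n_value = len(signs)                               # number of ppts minus any ties
print(s_value, n_value)  # compare S against the critical value for this N
```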

Levels of significance

The usual level of significance (alpha level) in psychology is 0.05. This means we accept up to a 5% (1 in 20) probability that the observed results happened by chance (a fluke). This is properly written as p ≤ 0.05 (p stands for probability).

There are some instances when a psychologist might choose to use a significance levels of 0.01. This means there is only up to a 1% chance that the observed results happened by chance. This is a stricter test and allows the researcher to be more confident.

Three factors to consider when comparing calculated values and critical values :

One-tailed/two-tailed hypothesis

Number of participants  (e.g. n=20)

Significance level (e.g. 0.05)

Writing up significance

The result is significant/ not significant because the observed value (T = 7) is higher / lower than the critical value of 11 with N = 10 and at the 0.05 level of significance.

For a sign test the N value is number of ppts – the number of ‘0’s.

Errors in significance testing

Type 1: False positive:  Rejecting the null hypothesis, when there is a possibility that the results were due to chance or other extraneous variables. Often caused by using a significance level that is too lenient e.g. 10%, 0.10, 1 in 10, p≤0.10. Not being cautious enough.

Type 2: False negative:  Accepting the null hypothesis, when there is a possibility that the results were significant. Often caused by using a significance level that is too strict e.g. 1%, 0.01, 1 in 100, p≤0.01. Being over cautious.

Qualitative data analysis

Content analysis

Content analysis is a research tool used to determine the presence of certain words, themes, or concepts within qualitative data. It could be analysis of texts, pictures, transcripts, diaries, media etc.  

Data can be placed in categories and counted (quantitative) or be  analysed in themes (qualitative). 

How to carry out content analysis

The researcher will read and reread the text  

Then devise coding units they are interested in measuring e.g. frequency of sexist words  

Reread the text and tally every time that the coding unit occurs  

A second researcher is often used to check the consistency of the coding by comparing the outcome (inter-rater reliability)  

The correlation co-efficient needs to be 0.8+ 
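As a minimal sketch of the tallying step (assuming Python; the transcript and coding units are made up for illustration):

```python
# Minimal sketch of content analysis: tally each occurrence of a coding unit.
transcript = ("I felt anxious before the exam. The anxiety got worse, "
              "but afterwards I felt calm and relieved.")

coding_units = {"anxious": 0, "anxiety": 0, "calm": 0}
for word in transcript.lower().replace(".", "").replace(",", "").split():
    if word in coding_units:
        coding_units[word] += 1   # one tally per occurrence of the coding unit

print(coding_units)   # {'anxious': 1, 'anxiety': 1, 'calm': 1}
```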

Strengths 

It is a reliable way to analyse qualitative data as the coding units are not open to interpretation and so are applied in the same way over time and with different researchers (inter-rater reliability)  

 It allows a statistical analysis to be conducted and an objective conclusion to be drawn 

Limitations 

As it only describes the data (i.e. what is happening) it cannot extract any deeper meaning or explanation (i.e. the why).  

Causality cannot be established as it merely describes the data 

Thematic analysis

Thematic analysis is a type of content analysis where you let the themes emerge from your interpretation. The researcher does not have any pre-determined ideas of the themes that may emerge. Thematic analysis turns the data into higher order and subordinate themes. 

How to carry out thematic analysis

Make transcriptions if needed  

Read and reread the data / transcript and try to understand the meaning communicated  

Use coding to initially analyse the transcripts  

Review the transcriptions/codes looking for emergent themes 

More meaning can be established compared to content analysis as it keeps the data as qualitative data  

Content and thematic analysis have high ecological validity because they are based on observations of what people actually do.

It is subjective. Bias reduces the objectivity of the findings because different observers may interpret the themes differently.  

 Analysis is also likely to be culture biased because the interpretation of verbal or written material will be affected by the language and culture of the observer and the themes identified. 

Scientific report writing

Writing a scientific report

Psychologists need to write up their research for publication in journal articles. They have to use a conventional format which includes the following:

Abstract – an overview of the whole report, around 150-300 words, summarising the introduction, method, results and discussion. It appears at the beginning of the report.

Introduction – literature review (theories and studies), broad to specific, aims and hypotheses

Method – design, sample (including method), target population, materials, procedure (standardised instructions in a recipe style), briefing, debriefing, ethics

Results – descriptive stats, inferential stats (test, critical and calculated values, levels of significance), acceptance/ rejection of hypotheses or qualitative analysis

Discussion – Summarise the results, link back to previous research and theories, limitations of the research and future recommendations, real world application 

Referencing – cite all the literature that has informed the project

Referencing

Referencing is an important aspect of psychological reports/journals. The reference section of a journal article includes full details of any sources, such as journal articles or books, that were used when writing the report. References ensure that work is not plagiarised. There are many styles, but Harvard style would be formatted as below:

Family name, INITIAL(S). Year. Title. Edition (if not first edition). Place of publication: Publisher.

Peer Review

Peer review is a process that takes place before a study is published to check the quality and validity of the research and to ensure that the research contributes to its field. 

The stages of peer review

Research is conducted

Research is written up and submitted for publication

Research is passed on to the journal editor, who reads the article and judges whether it fits in with the journal

Research is sent to a group of experts who evaluate its quality and either accept or reject it

If it is accepted, the researcher gets it back with recommendations

The editor then decides whether it is published or not

The purpose of peer review

Prevents dissemination of unwarranted claims / unacceptable interpretations / personal views and deliberate fraud – improves quality of research

Validates the quality of research. Ensures published research is taken seriously because it has been independently scrutinised

It suggests amendments or improvements. Increases probability of weaknesses / errors being identified – authors and researchers are less objective about their own work.

To allocate research funding – research is paid for by various government and charitable bodies. For example, the government-run Medical Research Council had £605 million to spend in 2008-09 and has a duty to spend this responsibly. Such bodies have a vested interest in establishing which research projects are most worthwhile.

Features of Science

Remember THE PROF

Theory construction: Psychologists make theories based on data collected from studies. They then update theories based on new evidence.

Hypothesis testing: Psychologists set hypotheses and collect data to test them. They control variables in experimental methods. The IV and DV should be operationalised (precisely defined and measured). Fewer extraneous variables can then become confounding variables, so cause and effect can be established, which leads to high internal validity.

Empirical methods: These are methods that are observable and measurable. Observations must be based on sensory experience rather than simply describing one’s own thoughts and beliefs. A scientific idea is one that has been subjected to empirical testing, e.g. an experiment. Science therefore involves making predictions that are tested by empirical observation.

Paradigm: According to Kuhn, for a subject to be scientific there needs to be at least one shared paradigm. It could be argued that psychology does not have one, due to the many approaches offering different explanations. However, others argue that the cognitive approach is a shared paradigm in psychology.

Replicability: Replication is concerned with the ability to repeat an experiment. This can be ensured through a standardised procedure. This allows the reliability of the results to be checked.

Objectivity: The term relates to judgements, theories, explanations and findings that are based on observable phenomena (facts) and uninfluenced by emotions or personal prejudices. It is important to remain objective so that researcher bias does not threaten the validity of the results.

Falsifiability: Popper proposed falsifiability as an important feature of science. It is the principle that a theory can only be considered scientific if it is in principle possible to establish it as false.

Paradigms and paradigm shifts

A paradigm is a set of shared assumptions/beliefs about how behaviour/thought is studied/explained eg a focus on causal explanations of behaviour. A paradigm shift occurs where members of a scientific community change from one established way of explaining/studying a behaviour/thought to a new way, due to new/contradictory evidence. This shift leads to a ‘scientific revolution’ eg the cognitive revolution in the 1970s and the current emphasis on cognitive neuroscience.

Kuhn would argue Psychology is a pre-science as there is a range of views, a range of theories with no consensus and therefore it does not have an agreed paradigm. Not all people would agree with this statement as the cognitive approach is a main way of thinking in modern Psychology.

Theory construction

A theory is a set of general laws or principles which can explain and predict human behaviour. Theory construction enables predictions to be made which can be translated into hypotheses. Theories are tested by empirical methods and are refined in the light of evidence. This knowledge allows theory construction and testing to progress through the scientific cycle of enquiry.


Experimental Method

Experiments are one of the most popular and useful research methods in psychology. The key types are laboratory and field experiments.

Role in psychology

  • Experiments play a major role throughout psychology.
  • As a method, experiments allow one variable to be manipulated while keeping everything else the same.
  • This allows researchers to show cause and effect.

Laboratory experiments

  • Some experiments take place under controlled conditions, such as in a university room supervised by the researchers.
  • These are called laboratory (or ‘lab’) experiments.
  • The advantage of laboratory experiments is that they increase the level of control that a researcher can have.
  • But they reduce the level of ecological validity of the research.

Field experiments

  • Other experiments take place in a participant’s natural surroundings, such as their school or workplace.
  • These are called field experiments.
  • The advantage of field experiments is that they increase the ecological validity of the study by making the surroundings more realistic.
  • But they reduce the level of control.

True experiments

  • Both field experiments and lab experiments control the variables under investigation, and randomly allocate participants to groups.
  • These characteristics mean that they are true experiments.

Quasi-Experiments

Quasi-experiments are not true experiments because they lack control over the experimental groups used.

Lack of random allocation

  • For example, if one of the variables under investigation is gender, people can’t be randomly allocated to ‘male’ and ‘female’ conditions.
  • A study is termed a quasi-experiment if it lacks random allocation to groups but is like a true experiment in most or all other ways.

Examples of quasi-experiments

  • Other examples of quasi-experiments include studies which compare different types of personality (e.g. introverts versus extroverts) or compare people who have a psychological disorder with a control group who do not.
  • Such studies cannot randomly allocate people to groups.

Quasi vs lab

  • Quasi-experiments could take place in a lab, and all other aspects of the research and data gathering can be controlled.
  • This means they are easy to mix up with laboratory experiments.

Natural Experiments

Natural experiments are logically similar to true experiments, but the situation happens by itself and so is completely uncontrolled by the researcher.

Ethics

  • For example, it wouldn’t be ethically correct to expose people to a lot of stress to investigate its effects.
  • In such situations, a researcher may use a natural experiment.

Similarity to true experiments

  • For example, they could compare the educational outcomes of school pupils who experience a lot of stress versus those who do not.

Differences to true experiments

  • In contrast to a true experiment or a quasi-experiment, the variable under investigation happens by itself and so is completely uncontrolled by the researcher.
  • The researcher also has no control at all over who is in each ‘experimental’ group.

Location of natural experiments

  • Because natural experiments are not set up by the researcher, they always take place in participants’ everyday surroundings such as their home or school.
  • This means they are easy to mix up with field experiments.


Experimental Method

Laboratory experiments

  • A lab experiment is a type of research method in which the researcher is able to exert high levels of control over what happens as part of the experimental process
  • The researcher controls the environmental factors, such as noise and temperature (possible extraneous variables ) so that the effects of the independent variable (IV) upon the dependent variable (DV) can be clearly observed and measured
  • The same number of participants take part in each condition of the IV
  • Each participant is given the same instructions (apart from instructions regarding the task as this will differ per condition as per the IV)
  • The same task/materials are used as far as is possible given the IV
  • Participants are given the same amount of time to complete the task per condition and across conditions if the IV allows it
  • All variables are kept the same/constant : only the independent variable changes between conditions
  • Keeping all variables constant means the DV can be measured exactly using quantitative data

Evaluation points

Strengths:

  • Cause and effect conclusions are more possible than with other methods, due to the control the researcher is able to exert
  • The use of a standardised procedure means that the research is replicable, which increases reliability
  • High internal validity is achieved, as the independent variable may be seen to affect the dependent variable without interference from extraneous variables

Limitations:

  • Demand characteristics may be an issue, as participants know they are in a study and so may alter their behaviour, which impairs the validity of the study
  • This method often lacks ecological validity due to the artificial nature of the procedure
  • This method often lacks mundane realism, meaning the results cannot be generalised to real-world behaviour

Field Experiments 

  • A field experiment is a research method which takes place in a natural setting, away from the lab
  • The researcher has less control over what happens as part of the experimental process
  • The researcher controls the environment to some extent, but has to accept that many extraneous variables are present in field experiments
  • Examples of field experiments include:
  • A confederate of the researcher pretends to collapse on a subway train: the IV is whether the victim appears to be drunk or disabled, the DV is the number of people who go to the victim’s aid
  • A researcher implements a ‘Kindness’ programme with half of the Year 5 students in a primary school: the IV is whether the students have followed the ‘Kindness’ programme or not, the DV is the score they achieve on a questionnaire about prosocial behaviour after one month
  • Qualitative data could also be collected in such studies, for example:
  • Interviews with passengers who witnessed the ‘victim’ collapsing on the train
  • Teachers’ observations of behavioural differences in the ‘Kindness’ programme children across the month of the study
  • Any qualitative data collected could be used to comment on the quantitative findings and shed light on the actions of the participants

Strengths:

  • Likely to have higher ecological validity, as the experiment takes place in a real-life setting
  • Participants are less likely to show demand characteristics, as they are less likely to know what is expected from them and are often in their 'natural' environment
  • High levels of mundane realism, which means the results are more likely to generalise to real-world behaviours

Limitations:

  • Harder to randomly assign participants, which means it is more likely that a change could be due to participant variables rather than what the researcher is measuring
  • Harder to control extraneous variables within the experiment, which could change the measurement of the dependent variable

Natural Experiments 

  • A natural experiment is a research method in which the researcher does not manipulate the IV; instead, it uses naturally occurring phenomena, for example:
  • Age e.g. an experiment in which digit-span recall is tested in a group of young people compared to a group of older people
  • Gender e.g. the performance of girls is compared to the performance of boys in an experiment testing emotional intelligence
  • Circumstances e.g. a group of teachers from one school who have received training in empathy are compared to a group of teachers from another school who have not had this training, on a task involving correctly identifying emotional states
  • The researcher has less control over what happens as part of the experimental process, as they cannot randomly allocate participants to conditions (the participants define the conditions, e.g. young/old, trained/untrained)
  • Natural experiments collect quantitative data

Strengths:

  • Allow research in areas where controlled experiments could not be conducted, for ethical or cost reasons
  • High external validity, as they are conducted in a natural setting with natural behaviours being exhibited

Limitations:

  • Difficult to establish a cause-and-effect relationship, as too many variables are uncontrolled and could affect the outcome
  • Lack of reliability, as it is extremely unlikely that the same situation could be replicated to test the findings again


Author: Claire Neeson

Claire has been teaching for 34 years, in the UK and overseas. She has taught GCSE, A-level and IB Psychology which has been a lot of fun and extremely exhausting! Claire is now a freelance Psychology teacher and content creator, producing textbooks, revision notes and (hopefully) exciting and interactive teaching materials for use in the classroom and for exam prep. Her passion (apart from Psychology of course) is roller skating and when she is not working (or watching 'Coronation Street') she can be found busting some impressive moves on her local roller rink.


Introduction to Research Methods in Psychology

There are several different research methods in psychology , each of which can help researchers learn more about the way people think, feel, and behave. If you're a psychology student or just want to know the types of research in psychology, here are the main ones as well as how they work.

Three Main Types of Research in Psychology


Psychology research can usually be classified as one of three major types.

1. Causal or Experimental Research

When most people think of scientific experimentation, research on cause and effect is most often brought to mind. Experiments on causal relationships investigate the effect of one or more variables on one or more outcome variables. This type of research also determines if one variable causes another variable to occur or change.

An example of this type of research in psychology would be changing the length of a specific mental health treatment and measuring the effect on study participants.

2. Descriptive Research

Descriptive research seeks to depict what already exists in a group or population. Three types of psychology research utilizing this method are:

  • Case studies
  • Observational studies
  • Surveys

An example of this psychology research method would be an opinion poll to determine which presidential candidate people plan to vote for in the next election. Descriptive studies don't try to measure the effect of a variable; they seek only to describe it.

3. Relational or Correlational Research

A study that investigates the connection between two or more variables is considered relational research. The variables compared are generally already present in the group or population.

For example, a study that looks at the proportion of males and females that would purchase either a classical CD or a jazz CD would be studying the relationship between gender and music preference.

Theory vs. Hypothesis in Psychology Research

People often confuse the terms theory and hypothesis or are not quite sure of the distinctions between the two concepts. If you're a psychology student, it's essential to understand what each term means, how they differ, and how they're used in psychology research.

A theory is a well-established principle that has been developed to explain some aspect of the natural world. A theory arises from repeated observation and testing and incorporates facts, laws, predictions, and tested hypotheses that are widely accepted.

A hypothesis is a specific, testable prediction about what you expect to happen in your study. For example, an experiment designed to look at the relationship between study habits and test anxiety might have a hypothesis that states, "We predict that students with better study habits will suffer less test anxiety." Unless your study is exploratory in nature, your hypothesis should always explain what you expect to happen during the course of your experiment or research.

While the terms are sometimes used interchangeably in everyday use, the difference between a theory and a hypothesis is important when studying experimental design.

Some other important distinctions to note include:

  • A theory predicts events in general terms, while a hypothesis makes a specific prediction about a specified set of circumstances.
  • A theory has been extensively tested and is generally accepted, while a hypothesis is a speculative guess that has yet to be tested.

The Effect of Time on Research Methods in Psychology

There are two types of time dimensions that can be used in designing a research study:

  • Cross-sectional research takes place at a single point in time. All tests, measures, or variables are administered to participants on one occasion. This type of research seeks to gather data on present conditions instead of looking at the effects of a variable over a period of time.
  • Longitudinal research is a study that takes place over a period of time. Data is first collected at the beginning of the study, and may then be gathered repeatedly throughout the length of the study. Some longitudinal studies may occur over a short period of time, such as a few days, while others may take place over a period of months, years, or even decades.

The effects of aging are often investigated using longitudinal research.

Causal Relationships Between Psychology Research Variables

What do we mean when we talk about a “relationship” between variables? In psychological research, we're referring to a connection between two or more factors that we can measure or systematically vary.

One of the most important distinctions to make when discussing the relationship between variables is the meaning of causation.

A causal relationship is when one variable causes a change in another variable. These types of relationships are investigated by experimental research to determine if changes in one variable actually result in changes in another variable.

Correlational Relationships Between Psychology Research Variables

A correlation is the measurement of the relationship between two variables. These variables already occur in the group or population and are not controlled by the experimenter.

  • A positive correlation is a direct relationship where, as the amount of one variable increases, the amount of a second variable also increases.
  • In a negative correlation , as the amount of one variable goes up, the levels of another variable go down.

In both types of correlation, there is no evidence or proof that changes in one variable cause changes in the other variable. A correlation simply indicates that there is a relationship between the two variables.

The most important concept is that correlation does not equal causation. Many popular media sources make the mistake of assuming that simply because two variables are related, a causal relationship exists.
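
To make the distinction concrete, here is a minimal sketch of how a correlation coefficient might be calculated, written in Python with only the standard library; the variables and scores are invented for illustration and are not from any particular study.

    import statistics

    # Hypothetical co-variables: hours of revision and test scores for six students
    revision_hours = [2, 4, 5, 7, 8, 10]
    test_scores = [35, 48, 50, 61, 66, 74]

    # Pearson's correlation coefficient (statistics.correlation requires Python 3.10+)
    r = statistics.correlation(revision_hours, test_scores)
    print(f"r = {r:.2f}")  # a value close to +1 indicates a strong positive correlation

Even a very strong r here would say nothing about causation: a third, unmeasured variable (such as motivation) could be driving both sets of scores.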

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."



Research Methods Key Term Glossary

Last updated 22 Mar 2021


This key term glossary provides brief definitions for the core terms and concepts covered in Research Methods for A Level Psychology.

Don't forget to also make full use of our research methods study notes and revision quizzes to support your studies and exam revision.

Aim

The researcher’s area of interest – what they are looking at (e.g. to investigate helping behaviour).

Bar chart

A graph that shows the data in the form of categories (e.g. behaviours observed) that the researcher wishes to compare.

Behavioural categories

Key behaviours, or collections of behaviours, that the researcher conducting the observation will pay attention to and record.

Case study

In-depth investigation of a single person, group or event, where data are gathered from a variety of sources and by using several different methods (e.g. observations & interviews).

Closed questions

Questions where there are fixed choices of responses e.g. yes/no. They generate quantitative data

Co-variables

The variables investigated in a correlation

Concurrent validity

Comparing a new test with another test of the same thing to see if they produce similar results. If they do then the new test has concurrent validity

Confidentiality

Unless agreed beforehand, participants have the right to expect that all data collected during a research study will remain confidential and anonymous.

Confounding variable

An extraneous variable that varies systematically with the IV so we cannot be sure of the true source of the change to the DV

Content analysis

Technique used to analyse qualitative data which involves coding the written data into categories – converting qualitative data into quantitative data.

Control group

A group that is treated normally and gives us a measure of how people behave when they are not exposed to the experimental treatment (e.g. allowed to sleep normally).

Controlled observation

An observation study where the researchers control some variables - often takes place in a laboratory setting.

Correlational analysis

A mathematical technique where the researcher looks to see whether scores for two covariables are related

Counterbalancing

A way of trying to control for order effects in a repeated measures design, e.g. half the participants do condition A followed by B and the other half do B followed by A
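
As a rough sketch of the idea (the participant labels and conditions below are hypothetical, not taken from any study), counterbalancing could be illustrated in Python by giving each half of the participants the two conditions in opposite orders:

    # Hypothetical participants in a repeated measures design
    participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]
    half = len(participants) // 2

    # First half complete condition A then B; second half complete B then A,
    # so any practice or boredom effects are spread evenly across the two conditions
    orders = {p: ["A", "B"] for p in participants[:half]}
    orders.update({p: ["B", "A"] for p in participants[half:]})

    for p, order in orders.items():
        print(p, "->", " then ".join(order))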

Covert observation

Also known as an undisclosed observation as the participants do not know their behaviour is being observed

Critical value

The value that the calculated test statistic must reach in order for the result to be significant and the hypothesis to be accepted.

Debriefing

After completing the research, the true aim is revealed to the participant. The aim of debriefing is to return the person to the state they were in before they took part.

Deception

Involves misleading participants about the purpose of a study.

Demand characteristics

Occur when participants try to make sense of the research situation they are in and try to guess the purpose of the research or try to present themselves in a good way.

Dependent variable

The variable that is measured to tell you the outcome.

Descriptive statistics

Analysis of data that helps describe, show or summarize data in a meaningful way

Directional hypothesis

A one-tailed hypothesis that states the direction of the difference or relationship (e.g. boys are more helpful than girls).

Dispersion measure

A dispersion measure shows how a set of data is spread out, examples are the range and the standard deviation

Double blind control

Participants are not told the true purpose of the research and the experimenter is also blind to at least some aspects of the research design.

Ecological validity

The extent to which the findings of a research study can be generalised to real-life settings.

Ethical guidelines

These are provided by the BPS - they are the ‘rules’ by which all psychologists should operate, including those carrying out research.

Ethical issues

There are 3 main ethical issues that occur in psychological research – deception, lack of informed consent and lack of protection of participants.

Evaluation apprehension

Participants’ behaviour is distorted as they fear being judged by observers

Event sampling

A target behaviour is identified and the observer records it every time it occurs

Experimental group

The group that received the experimental treatment (e.g. sleep deprivation)

External validity

Whether it is possible to generalise the results beyond the experimental setting.

Extraneous variable

Variables that, if not controlled, may affect the DV and provide a false impression that an IV has produced changes when it hasn’t.

Face validity

Simple way of assessing whether a test measures what it claims to measure which is concerned with face value – e.g. does an IQ test look like it tests intelligence.

Field experiment

An experiment that takes place in a natural setting where the experimenter manipulates the IV and measures the DV

Histogram

A graph that is used for continuous data (e.g. test scores). There should be no space between the bars, because the data is continuous.

Hypothesis

This is a formal statement or prediction of what the researcher expects to find. It needs to be testable.

Independent groups design

An experimental design where each participant takes part in only one condition of the IV.

Independent variable

The variable that the experimenter manipulates (changes).

Inferential statistics

Inferential statistics are ways of analyzing data using statistical tests that allow the researcher to make conclusions about whether a hypothesis was supported by the results.

Informed consent

Psychologists should ensure that all participants are helped to understand fully all aspects of the research before they agree (give consent) to take part

Inter-observer reliability

The extent to which two or more observers are observing and recording behaviour in the same way

Internal validity

In relation to experiments, whether the results were due to the manipulation of the IV rather than other factors such as extraneous variables or demand characteristics.

Interval level data

Data measured in fixed units with equal distance between points on the scale

Investigator effects

These result from the effects of a researcher’s behaviour and characteristics on an investigation.

Laboratory experiment

An experiment that takes place in a controlled environment where the experimenter manipulates the IV and measures the DV

Matched pairs design

An experimental design where pairs of participants are matched on important characteristics and one member allocated to each condition of the IV

Mean

Measure of central tendency calculated by adding all the scores in a set of data together and dividing by the total number of scores.

Measures of central tendency

A measurement of data that indicates where the middle of the information lies e.g. mean, median or mode

Median

Measure of central tendency calculated by arranging scores in a set of data from lowest to highest and finding the middle score.

Meta-analysis

A technique where rather than conducting new research with participants, the researchers examine the results of several studies that have already been conducted

Mode

Measure of central tendency which is the most frequently occurring score in a set of data.
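
Because the three measures of central tendency are easy to mix up, here is a short worked sketch using Python's statistics module; the data set is invented purely for illustration.

    import statistics

    scores = [3, 5, 5, 6, 7, 8, 20]  # hypothetical memory-test scores

    print(statistics.mean(scores))    # (3+5+5+6+7+8+20) / 7 = 7.71..., pulled upwards by the outlier 20
    print(statistics.median(scores))  # middle score once ordered = 6
    print(statistics.mode(scores))    # most frequent score = 5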

Natural experiment

An experiment where the change in the IV already exists rather than being manipulated by the experimenter

Naturalistic observation

An observation study conducted in the environment where the behaviour would normally occur

Negative correlation

A relationship between two co-variables in which, as one increases, the other decreases.

Nominal level data

Frequency count data that consists of the number of participants falling into categories. (e.g. 7 people passed their driving test first time, 6 didn’t).

Non-directional hypothesis

A two-tailed hypothesis that does not predict the direction of the difference or relationship (e.g. girls and boys are different in terms of helpfulness).

Normal distribution

An arrangement of data that is symmetrical and forms a bell-shaped pattern where the mean, median and mode all fall in the centre at the highest peak.

Observed value

The value that you have obtained from conducting your statistical test

Observer bias

Occurs when the observers know the aims of the study or the hypotheses and allow this knowledge to influence their observations.

Open questions

Questions where there is no fixed response and participants can give any answer they like. They generate qualitative data.

Operationalising variables

This means clearly describing the variables (IV and DV) in terms of how they will be manipulated (IV) or measured (DV).

Opportunity sample

A sampling technique where participants are chosen because they are easily available

Order effects

Order effects can occur in a repeated measures design and refers to how the positioning of tasks influences the outcome e.g. practice effect or boredom effect on second task

Ordinal level data

Data that is capable of being put into rank order (e.g. places in a beauty contest, or ratings for attractiveness).

Overt observation

Also known as a disclosed observation, as the participants have given their permission for their behaviour to be observed.

Participant observation

Observation study where the researcher actually joins the group or takes part in the situation they are observing.

Peer review

Before going to publication, a research report is sent to other psychologists who are knowledgeable in the research topic for them to review the study and check for any problems.

Pilot study

A small scale study conducted to ensure the method will work according to plan. If it doesn’t then amendments can be made.

Positive correlation

A relationship between two co-variables in which, as one increases, so does the other.

Presumptive consent

Asking a group of people from the same target population as the sample whether they would agree to take part in such a study; if they say yes, it is presumed that the actual sample would also have consented.

Primary data

Information that the researcher has collected him/herself for a specific purpose e.g. data from an experiment or observation

Prior general consent

Before participants are recruited they are asked whether they are prepared to take part in research where they might be deceived about the true purpose

Probability

How likely something is to happen – can be expressed as a number (0.5) or a percentage (a 50% chance of tossing a coin and getting a head).

Protection of participants

Participants should be protected from physical or psychological harm, including stress - the risk of harm must be no greater than that to which they are exposed in everyday life.

Qualitative data

Descriptive information that is expressed in words

Quantitative data

Information that can be measured and written down with numbers.

Quasi experiment

An experiment often conducted in controlled conditions where the IV simply exists so there can be no random allocation to the conditions

Questionnaire

A set of written questions that participants fill in themselves

Random sampling

A sampling technique where everyone in the target population has an equal chance of being selected

Randomisation

Refers to the practice of using chance methods (e.g. flipping a coin) to allocate participants to the conditions of an investigation.
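
Purely as an illustration of these two ideas (the population, sample size and condition names are hypothetical), random sampling and random allocation can both be sketched with Python's random module:

    import random

    random.seed(1)  # fixed seed so the sketch gives the same output each run

    # Random sampling: every member of the target population has an equal chance of selection
    target_population = [f"Student{i}" for i in range(1, 101)]
    sample = random.sample(target_population, 10)

    # Randomisation: chance decides which condition each sampled participant is allocated to
    random.shuffle(sample)
    condition_a, condition_b = sample[:5], sample[5:]

    print("Condition A:", condition_a)
    print("Condition B:", condition_b)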

Range

A measure of dispersion: the distance between the lowest and the highest value in a set of scores, calculated by subtracting the lowest score from the highest.

Reliability

Whether something is consistent. In the case of a study, whether it is replicable.

Repeated measures design

An experimental design where each participant takes part in both/all conditions of the IV.

Representative sample

A sample that closely matches the target population as a whole in terms of key variables and characteristics.

Retrospective consent

Once the true nature of the research has been revealed, participants should be given the right to withdraw their data if they are not happy.

Right to withdraw

Participants should be aware that they can leave the study at any time, even if they have been paid to take part.

Sample

A group of people drawn from the target population to take part in a research investigation.

Scattergram

Used to plot correlations where each pair of values is plotted against each other to see if there is a relationship between them.
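
If it helps to picture what a scattergram shows, the following sketch plots one co-variable against the other; it assumes the matplotlib library is installed, and the data points are invented.

    import matplotlib.pyplot as plt

    # Hypothetical co-variables
    hours_of_sleep = [4, 5, 6, 6, 7, 8, 9]
    mood_rating = [3, 4, 5, 6, 6, 8, 9]

    plt.scatter(hours_of_sleep, mood_rating)
    plt.xlabel("Hours of sleep")
    plt.ylabel("Mood rating (1-10)")
    plt.title("Scattergram of sleep against mood")
    plt.show()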

Secondary data

Information that someone else has collected e.g. the work of other psychologists or government statistics

Semi-structured interview

Interview that has some pre-determined questions, but the interviewer can develop others in response to answers given by the participant

Sign test

A statistical test used to analyse the direction of differences of scores between the same or matched pairs of subjects under two experimental conditions.
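
The logic of the sign test can be sketched in a few lines of Python; the paired scores are invented, and the critical value shown is a placeholder that would in practice be read from a sign-test table for the chosen significance level and the number of non-tied pairs.

    # Hypothetical scores for the same participants under two conditions
    condition_1 = [12, 15, 9, 14, 11, 16, 13, 10]
    condition_2 = [14, 15, 12, 17, 10, 19, 15, 13]

    signs = []
    for a, b in zip(condition_1, condition_2):
        if b > a:
            signs.append("+")
        elif b < a:
            signs.append("-")
        # ties (no difference) are dropped and do not count towards N

    s = min(signs.count("+"), signs.count("-"))  # the calculated (observed) value
    n = len(signs)                               # number of non-tied pairs

    critical_value = 0  # placeholder: look this up in a sign-test table for this N and significance level
    print(f"S = {s}, N = {n}, significant: {s <= critical_value}")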

Significance

If the result of a statistical test is significant it is highly unlikely to have occurred by chance

Single-blind control

Participants are not told the true purpose of the research

Skewed distribution

An arrangement of data that is not symmetrical, as data is clustered to one end of the distribution.

Social desirability bias

Participants’ behaviour is distorted as they modify this in order to be seen in a positive light.

Standard deviation

A measure of the average spread of scores around the mean. The greater the standard deviation, the more spread out the scores are.
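
As a quick worked illustration (the scores are invented), the range and standard deviation can be calculated with Python's statistics module:

    import statistics

    scores = [10, 12, 14, 15, 19]  # hypothetical scores; mean = 14

    data_range = max(scores) - min(scores)  # 19 - 10 = 9
    sd = statistics.stdev(scores)           # sample standard deviation around the mean

    print(f"Range = {data_range}, SD = {sd:.2f}")  # Range = 9, SD = 3.39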

Standardised instructions

The instructions given to each participant are kept identical – to help prevent experimenter bias.

Standardised procedures

In every step of the research all the participants are treated in exactly the same way and so all have the same experience.

Stratified sample

A sampling technique where groups of participants are selected in proportion to their frequency in the target population

Structured interview

Interview where the questions are fixed and the interviewer reads them out and records the responses

Structured observation

An observation study using predetermined coding scheme to record the participants' behaviour

Systematic sample

A sampling technique where every nth person in a list of the target population is selected

Target population

The group that the researcher draws the sample from and wants to be able to generalise the findings to.

Temporal validity

Refers to how likely it is that the time period when a study was conducted has influenced the findings and whether they can be generalised to other periods in time

Test-retest reliability

Involves presenting the same participants with the same test or questionnaire on two separate occasions and seeing whether there is a positive correlation between the two

Thematic analysis

A method for analysing qualitative data which involves identifying, analysing and reporting patterns within the data

Time sampling

A way of sampling the behaviour that is being observed by recording what happens in a series of fixed time intervals.

Type 1 error

Is a false positive. It is where you accept the alternative/experimental hypothesis when it is false

Type 2 error

Is a false negative. It is where you accept the null hypothesis when it is false

Unstructured interview

Also known as a clinical interview, there are no fixed questions, just general aims, and it is more like a conversation.

Unstructured observation

Observation where there is no checklist, so every behaviour seen is written down in as much detail as possible.

Validity

Whether something is true – whether it measures what it sets out to measure.

Volunteer sample

A sampling technique where participants put themselves forward to take part in research, often by answering an advertisement



APA Handbook of Research Methods in Psychology


With significant new and updated content across dozens of chapters, this second edition  presents the most exhaustive treatment available of the techniques psychologists and others have developed to help them pursue a shared understanding of why humans think, feel, and behave the way they do.

The initial chapters in this indispensable three-volume handbook address broad, crosscutting issues faced by researchers: the philosophical, ethical, and societal underpinnings of psychological research. Next, chapters detail the research planning process, describe the range of measurement techniques that psychologists most often use to collect data, consider how to determine the best measurement techniques for a particular purpose, and examine ways to assess the trustworthiness of measures.

Additional chapters cover various aspects of quantitative, qualitative, neuropsychological, and biological research designs, presenting an array of options and their nuanced distinctions. Chapters on techniques for data analysis follow, and important issues in writing up research to share with the community of psychologists are discussed in the handbook’s concluding chapters.

Among the newly written chapters in the second edition, the handbook’s stellar roster of authors cover literature searching, workflow and reproducibility, research funding, neuroimaging, various facets of a wide range of research designs and data analysis methods, and updated information on the publication process, including research data management and sharing, questionable practices in statistical analysis, and ethical issues in manuscript preparation and authorship.

Volume 1. Foundations, Planning, Measures, and Psychometrics

Editorial Board

About the Editors

Contributors

A Note from the Publisher

Introduction: Objectives of Psychological Research and Their Relations to Research Methods

Part I. Philosophical, Ethical, and Societal Underpinnings of Psychological Research

  • Chapter 1. Perspectives on the Epistemological Bases for Qualitative Research Carla Willig
  • Chapter 2. Frameworks for Causal Inference in Psychological Science Peter M. Steiner, William R. Shadish, and Kristynn J. Sullivan
  • Chapter 3. Ethics in Psychological Research: Guidelines and Regulations Adam L. Fried and Kate L. Jansen
  • Chapter 4. Ethics and Regulation of Research With Nonhuman Animals Sangeeta Panicker, Chana K. Akins, and Beth Ann Rice
  • Chapter 5. Cross-Cultural Research Methods David Matsumoto and Fons J. R. van de Vijver
  • Chapter 6. Research With Populations that Experience Marginalization George P. Knight, Rebecca M. B. White, Stefanie Martinez-Fuentes, Mark W. Roosa, and Adriana J. Umaña-Taylor

Part II. Planning Research

  • Chapter 7. Developing Testable and Important Research Questions Frederick T. L. Leong, Neal Schmitt, and Brent J. Lyons
  • Chapter 8. Searching With a Purpose: How to Use Literature Searching to Support Your Research Diana Ramirez and Margaret J. Foster
  • Chapter 9. Psychological Measurement: Scaling and Analysis Heather Hayes and Susan E. Embretson
  • Chapter 10. Sample Size Planning Ken Kelley, Samantha F. Anderson, and Scott E. Maxwell
  • Chapter 11. Workflow and Reproducibility Oliver Kirchkamp
  • Chapter 12. Obtaining and Evaluating Research Funding Jonathan S. Comer and Amanda L. Sanchez

Part III. Measurement Methods

  • Chapter 13. Behavioral Observation Roger Bakeman and Vicenç Quera
  • Chapter 14. Question Order Effects Lisa Lee, Parvati Krishnamurty, and Struther Van Horn
  • Chapter 15. Interviews and Interviewing Techniques Anna Madill
  • Chapter 16. Using Intensive Longitudinal Methods in Psychological Research Masumi Iida, Patrick E. Shrout, Jean-Philippe Laurenceau, and Niall Bolger
  • Chapter 17. Automated Analyses of Natural Language in Psychological Research Laura K. Allen, Arthur C. Graesser, and Danielle S. McNamara
  • Chapter 18. Objective Tests as Instruments of Psychological Theory and Research David Watson
  • Chapter 19. Norm- and Criterion-Referenced Testing Kurt F. Geisinger
  • Chapter 20. The Current Status of "Projective" "Tests" Robert E. McGrath, Alec Twibell, and Elizabeth J. Carroll
  • Chapter 21. Brief Instruments and Short Forms Emily A. Atkinson, Carolyn M. Pearson Carter, Jessica L. Combs Rohr, and Gregory T. Smith
  • Chapter 22. Eye Movements, Pupillometry, and Cognitive Processes Simon P. Liversedge, Sara V. Milledge, and Hazel I. Blythe
  • Chapter 23. Response Times Roger Ratcliff
  • Chapter 24. Psychophysics: Concepts, Methods, and Frontiers Allie C. Hexley, Takuma Morimoto, and Manuel Spitschan
  • Chapter 25. The Perimetric Physiological Measurement of Psychological Constructs Louis G. Tassinary, Ursula Hess, Luis M. Carcoba, and Joseph M. Orr
  • Chapter 26. Salivary Hormone Assays Linda Becker, Nicholas Rohleder, and Oliver C. Schultheiss
  • Chapter 27. Electro- and Magnetoencephalographic Methods in Psychology Eddie Harmon-Jones, David M. Amodio, Philip A. Gable, and Suzanne Dikker
  • Chapter 28. Event-Related Potentials Steven J. Luck
  • Chapter 29. Functional Neuroimaging Megan T. deBettencourt, Wilma A. Bainbridge, Monica D. Rosenberg
  • Chapter 30. Noninvasive Stimulation of the Cerebral Cortex Dennis J. L. G. Schutter
  • Chapter 31. Combined Neuroimaging Methods Marius Moisa and Christian C. Ruff
  • Chapter 32. Neuroimaging Analysis Methods Yanyu Xiong and Sharlene D. Newman

Part IV. Psychometrics

  • Chapter 33. Reliability Sean P. Lane, Elizabeth N. Aslinger, and Patrick E. Shrout
  • Chapter 34. Generalizability Theory Xiaohong Gao and Deborah J. Harris
  • Chapter 35. Construct Validity Kevin J. Grimm and Keith F. Widaman
  • Chapter 36. Item-Level Factor Analysis Nisha C. Gottfredson, Brian D. Stucky, and A. T. Panter
  • Chapter 37. Item Response Theory Steven P. Reise and Tyler M. Moore
  • Chapter 38. Measuring Test Performance With Signal Detection Theory Techniques Teresa A. Treat and Richard J. Viken

Volume 2. Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological

Part I. Qualitative Research Methods

  • Chapter 1. Developments in Qualitative Inquiry Sarah Riley and Andrea LaMarre
  • Chapter 2. Metasynthesis of Qualitative Research Sally Thorne
  • Chapter 3. Grounded Theory and Psychological Research Robert Thornberg, Elaine Keane, and Malgorzata Wójcik
  • Chapter 4. Thematic Analysis Virginia Braun and Victoria Clarke
  • Chapter 5. Phenomenological Methodology, Methods, and Procedures for Research in Psychology Frederick J. Wertz
  • Chapter 6. Narrative Analysis Javier Monforte and Brett Smith
  • Chapter 7. Ethnomethodology and Conversation Analysis Paul ten Have
  • Chapter 8. Discourse Analysis and Discursive Psychology Chris McVittie and Andy McKinlay
  • Chapter 9. Ethnography in Psychological Research Elizabeth Fein and Jonathan Yahalom
  • Chapter 10. Visual Research in Psychology Paula Reavey, Jon Prosser, and Steven D. Brown
  • Chapter 11. Researching the Temporal Karen Henwood and Fiona Shirani

Part II. Working Across Epistemologies, Methodologies, and Methods

  • Chapter 12. Mixed Methods Research in Psychology Timothy C. Guetterman and Analay Perez
  • Chapter 13. The "Cases Within Trials" (CWT) Method: An Example of a Mixed-Methods Research Design Daniel B. Fishman
  • Chapter 14. Researching With American Indian and Alaska Native Communities: Pursuing Partnerships for Psychological Inquiry in Service to Indigenous Futurity Joseph P. Gone
  • Chapter 15. Participatory Action Research as Movement Toward Radical Relationality, Epistemic Justice, and Transformative Intervention: A Multivocal Reflection Urmitapa Dutta, Jesica Siham Fernández, Anne Galletta, and Regina Day Langhout

Part III. Sampling Across People and Time

  • Chapter 16. Introduction to Survey Sampling Roger Tourangeau and Ting Yan
  • Chapter 17. Epidemiology Rumi Kato Price and Heidi H. Tastet
  • Chapter 18. Collecting Longitudinal Data: Present Issues and Future Challenges Simran K. Johal, Rohit Batra, and Emilio Ferrer
  • Chapter 19. Using the Internet to Collect Data Ulf-Dietrich Reips

Part IV. Building and Testing Models

  • Chapter 20. Statistical Mediation Analysis David P. MacKinnon, Jeewon Cheong, Angela G. Pirlott, and Heather L. Smyth
  • Chapter 21. Structural Equation Modeling with Latent Variables Rick H. Hoyle and Nisha C. Gottfredson
  • Chapter 22. Mathematical Psychology Parker Smith, Yanjun Liu, James T. Townsend, and Trisha Van Zandt
  • Chapter 23. Computational Modeling Adele Diederich
  • Chapter 24. Fundamentals of Bootstrapping and Monte Carlo Methods William Howard Beasley, Patrick O'Keefe, and Joseph Lee Rodgers
  • Chapter 25. Designing Simulation Studies Xitao Fan
  • Chapter 26. Bayesian Modeling for Psychologists: An Applied Approach Fred M. Feinberg and Richard Gonzalez

Part V. Designs Involving Experimental Manipulations

  • Chapter 27. Randomized Designs in Psychological Research Larry Christensen, Lisa A. Turner, and R. Burke Johnson
  • Chapter 28. Nonequivalent Comparison Group Designs Henry May and Zachary K. Collier
  • Chapter 29. Regression Discontinuity Designs Charles S. Reichardt and Gary T. Henry
  • Chapter 30. Treatment Validity for Intervention Studies Dianne L. Chambless and Steven D. Hollon
  • Chapter 31. Translational Research Michael T. Bardo, Christopher Cappelli, and Mary Ann Pentz
  • Chapter 32. Program Evaluation: Outcomes and Costs of Putting Psychology to Work Brian T. Yates

Part VI. Quantitative Research Designs Involving Single Participants or Units

  • Chapter 33. Single-Case Experimental Design John M. Ferron, Megan Kirby, and Lodi Lipien
  • Chapter 34. Time Series Designs Bradley J. Bartos, Richard McCleary, and David McDowall

Part VII. Designs in Neuropsychology and Biological Psychology

  • Chapter 35. Case Studies in Neuropsychology Randi C. Martin, Simon Fischer-Baum, and Corinne M. Pettigrew
  • Chapter 36. Group Studies in Experimental Neuropsychology Avinash R Vaidya, Maia Pujara, and Lesley K. Fellows
  • Chapter 37. Genetic Methods in Psychology Terrell A. Hicks, Daniel Bustamante, Karestan C. Koenen, Nicole R. Nugent, and Ananda B. Amstadter
  • Chapter 38. Human Genetic Epidemiology Floris Huider, Lannie Ligthart, Yuri Milaneschi, Brenda W. J. H. Penninx, and Dorret I. Boomsma

Volume 3. Data Analysis and Research Publication

Part I. Quantitative Data Analysis

  • Chapter 1. Methods for Dealing With Bad Data and Inadequate Models: Distributions, Linear Models, and Beyond Rand R. Wilcox and Guillaume A. Rousselet
  • Chapter 2. Maximum Likelihood and Multiple Imputation Missing Data Handling: How They Work, and How to Make Them Work in Practice Timothy Hayes and Craig K. Enders
  • Chapter 3. Exploratory Data Analysis Paul F. Velleman and David C. Hoaglin
  • Chapter 4. Graphic Displays of Data Leland Wilkinson
  • Chapter 5. Estimating and Visualizing Interactions in Moderated Multiple Regression Connor J. McCabe and Kevin M. King
  • Chapter 6. Effect Size Estimation Michael Borenstein
  • Chapter 7. Measures of Clinically Significant Change Russell J. Bailey, Benjamin M. Ogles, and Michael J. Lambert
  • Chapter 8. Analysis of Variance and the General Linear Model James Jaccard and Ai Bo
  • Chapter 9. Generalized Linear Models David Rindskopf
  • Chapter 10. Multilevel Modeling for Psychologists John B. Nezlek
  • Chapter 11. Longitudinal Data Analysis Andrew K. Littlefield
  • Chapter 12. Event History Analysis Fetene B. Tekle and Jeroen K. Vermunt
  • Chapter 13. Latent State-Trait Models Rolf Steyer, Christian Geiser, and Christiane Loßnitzer
  • Chapter 14. Latent Variable Modeling of Continuous Growth David A. Cole, Jeffrey A. Ciesla, and Qimin Liu
  • Chapter 15. Dynamical Systems and Differential Equation Models of Change Steven M. Boker and Robert G. Moulder
  • Chapter 16. A Multivariate Growth Curve Model for Three-Level Data Patrick J. Curran, Chris L. Strauss, Ethan M. McCormick, and James S. McGinley
  • Chapter 17. Exploratory Factor Analysis and Confirmatory Factor Analysis Keith F. Widaman and Jonathan Lee Helm
  • Chapter 18. Latent Class and Latent Profile Models Brian P. Flaherty, Liying Wang, and Cara J. Kiff
  • Chapter 19. Decision Trees and Ensemble Methods in the Behavioral Sciences Kevin J. Grimm, Ross Jacobucci, and John J. McArdle
  • Chapter 20. Using the Social Relations Model to Understand Interpersonal Perception and Behavior P. Niels Christensen, Deborah A. Kashy, and Katelin E. Leahy
  • Chapter 21. Dyadic Data Analysis Richard Gonzalez and Dale Griffin
  • Chapter 22. The Data of Others: New and Old Faces of Archival Research Sophie Pychlau and David T. Wagner
  • Chapter 23. Social Network Analysis in Psychology: Recent Breakthroughs in Methods and Theories Wei Wang, Tobias Stark, James D. Westaby, Adam K. Parr, and Daniel A. Newman
  • Chapter 24. Meta-Analysis Jeffrey C. Valentine, Therese D. Pigott, and Joseph Morris

Part II. Publishing and the Publication Process

  • Chapter 25. Research Data Management and Sharing Katherine G. Akers and John A. Borghi
  • Chapter 26. Questionable Practices in Statistical Analysis Rex B. Kline
  • Chapter 27. Ethical Issues in Manuscript Preparation and Authorship Jennifer Crocker

Harris Cooper, PhD, is the Hugo L. Blomquist professor, emeritus, in the Department of Psychology and Neuroscience at Duke University. His research interests concern research synthesis and research methodology, and he also studies the application of social and developmental psychology to education policy. His book Research Synthesis and Meta-Analysis: A Step-by-Step Approach (2017) is in its fifth edition. He is the coeditor of the Handbook of Research Synthesis and Meta-Analysis (3rd ed., 2019).

In 2007, Dr. Cooper was the recipient of the Frederick Mosteller Award for Contributions to Research Synthesis Methodology, and in 2008 he received the Ingram Olkin Award for Distinguished Lifetime Contribution to Research Synthesis from the Society for Research Synthesis Methodology.

He served as the chair of the Department of Psychology and Neuroscience at Duke University from 2009 to 2014, and from 2017 to 2018 he served as the dean of social science at Duke. Dr. Cooper chaired the first APA committee that developed guidelines for information about research that should be included in manuscripts submitted to APA journals. He currently serves as the editor of American Psychologist, the flagship journal of APA.

Marc N. Coutanche, PhD, is an associate professor of psychology and research scientist in the Learning Research and Development Center at the University of Pittsburgh. Dr. Coutanche directs a program of cognitive neuroscience research and develops and tests new computational techniques to identify and understand the neural information present within neuroimaging data.

His work has been funded by the National Institutes of Health, National Science Foundation, American Psychological Foundation, and other organizations, and he has published in a variety of journals.

Dr. Coutanche received his PhD from the University of Pennsylvania, and conducted postdoctoral training at Yale University. He received a Howard Hughes Medical Institute International Student Research Fellowship and Ruth L. Kirschstein Postdoctoral National Research Service Award, and was named a 2019 Rising Star by the Association for Psychological Science.

Linda M. McMullen, PhD, is professor emerita of psychology at the University of Saskatchewan, Canada. Over her career, she has contributed to the development of qualitative inquiry in psychology through teaching, curriculum development, and pedagogical scholarship; original research; and service to the qualitative research community.

Dr. McMullen introduced qualitative inquiry into both the graduate and undergraduate curriculum in her home department, taught courses at both levels for many years, and has published articles, coedited special issues, and written a book ( Essentials of Discursive Psychology ) that is part of APA’s series on qualitative methodologies, among other works. She has been engaged with building the Society for Qualitative Inquiry in Psychology (SQIP; a section of Division 5 of the APA) into a vibrant scholarly society since its earliest days, and took on many leadership roles while working as a university professor.

Dr. McMullen’s contributions have been recognized by Division 5 of the APA, the Canadian Psychological Association, and the Saskatchewan Psychological Association.

Abigail Panter, PhD, is the senior associate dean for undergraduate education and a professor of psychology in the L. L. Thurstone Psychometric Laboratory at University of North Carolina at Chapel Hill. She is past president of APA’s Division 5, Quantitative and Qualitative Methods.

As a quantitative psychologist, she develops instruments, research designs and data-analytic strategies for applied research questions in higher education, personality, and health. She serves as a program evaluator for UNC’s Chancellor’s Science Scholars Program, and was also principal investigator for The Finish Line Project, a $3 million grant from the U.S. Department of Education that systematically investigated new supports and academic initiatives, especially for first-generation college students.

Her books include the  APA Dictionary of Statistics and Research Methods  (2014), the APA Handbook of Research Methods in Psychology  (first edition; 2012), the Handbook of Ethics in Quantitative Methodology  (2011), and the SAGE Handbook of Methods in Social Psychology (2004), among others.

David Rindskopf, PhD, is distinguished professor at the City University of New York Graduate Center, specializing in research methodology and statistics. His main interests are in Bayesian statistics, causal inference, categorical data analysis, meta-analysis, and latent variable models.

He is a fellow of the American Statistical Association and the American Educational Research Association, and is past president of the Society of Multivariate Experimental Psychology and the New York Chapter of the American Statistical Association.

Kenneth J. Sher, PhD, is chancellor’s professor and curators’ distinguished professor of psychological sciences, emeritus, at the University of Missouri. He received his PhD in clinical psychology from Indiana University (1980) and his clinical internship training at Brown University (1981).

His primary areas of research focus on etiological processes in the development of alcohol dependence, factors that affect the course of drinking and alcohol use disorders throughout adulthood, longitudinal research methodology, psychiatric comorbidity, and nosology. At the University of Missouri he directed the predoctoral and postdoctoral training program in alcohol studies, and his research has been continually funded by the National Institute on Alcohol Abuse and Alcoholism for more than 35 years.

Dr. Sher’s research contributions have been recognized by professional societies including the Research Society on Alcoholism and APA, and throughout his career, he has been heavily involved in service to professional societies and scholarly publications.

