JMP | Statistical Discovery.™ From SAS.
Statistics Knowledge Portal
A free online introduction to statistics
The One-Sample t-Test
What is the one-sample t-test?
The one-sample t-test is a statistical hypothesis test used to determine whether an unknown population mean is different from a specific value.
When can I use the test?
You can use the test for continuous data. Your data should be a random sample from a normal population.
What if my data aren’t normally distributed?
If your sample size is very small, you might not be able to test for normality. You might need to rely on your understanding of the data. When you cannot safely assume normality, you can perform a nonparametric test that doesn’t assume normality.
Using the one-sample t-test
See how to perform a one-sample t-test using statistical software.
- Download JMP to follow along using the sample data included with the software.
- To see more JMP tutorials, visit the JMP Learning Library.
The sections below discuss what we need for the test, checking our data, performing the test, understanding test results and statistical details.
What do we need?
For the one-sample t-test, we need one variable.
We also have an idea, or hypothesis, that the mean of the population has some value. Here are two examples:
- A hospital has a random sample of cholesterol measurements for men. These patients were seen for issues other than cholesterol, and they were not taking any medications for high cholesterol. The hospital wants to know if the unknown mean cholesterol for these patients is different from a goal level of 200 mg/dL.
- We measure the grams of protein for a sample of energy bars. The label claims that the bars have 20 grams of protein. We want to know if the labels are correct or not.
One-sample t-test assumptions
For a valid test, we need data values that are:
- Independent (values are not related to one another).
- Continuous.
- Obtained via a simple random sample from the population.
Also, the population is assumed to be normally distributed .
One-sample t-test example
Imagine we have collected a random sample of 31 energy bars from a number of different stores to represent the population of energy bars available to the general consumer. The labels on the bars claim that each bar contains 20 grams of protein.
Table 1: Grams of protein in random sample of energy bars
| Energy Bar - Grams of Protein | | | | | | |
|---|---|---|---|---|---|---|
| 20.70 | 27.46 | 22.15 | 19.85 | 21.29 | 24.75 | |
| 20.75 | 22.91 | 25.34 | 20.33 | 21.54 | 21.08 | |
| 22.14 | 19.56 | 21.10 | 18.04 | 24.12 | 19.95 | |
| 19.72 | 18.28 | 16.26 | 17.46 | 20.53 | 22.12 | |
| 25.06 | 22.44 | 19.08 | 19.88 | 21.39 | 22.33 | 25.79 |
If you look at the table above, you see that some bars have less than 20 grams of protein. Other bars have more. You might think that the data support the idea that the labels are correct. Others might disagree. The statistical test provides a sound method to make a decision, so that everyone makes the same decision on the same set of data values.
Checking the data
Let’s start by answering: Is the t-test an appropriate method to test whether the energy bars have 20 grams of protein? The list below checks the requirements for the test.
- The data values are independent. The grams of protein in one energy bar do not depend on the grams in any other energy bar. An example of dependent values would be if you collected energy bars from a single production lot. A sample from a single lot is representative of that lot, not energy bars in general.
- The data values are grams of protein. The measurements are continuous.
- We assume the energy bars are a simple random sample from the population of energy bars available to the general consumer (i.e., a mix of lots of bars).
- We assume the population from which we are collecting our sample is normally distributed, and for large samples, we can check this assumption.
We decide that the t -test is an appropriate method.
Before jumping into analysis, we should take a quick look at the data. The figure below shows a histogram and summary statistics for the energy bars.
From a quick look at the histogram, we see that there are no unusual points, or outliers . The data look roughly bell-shaped, so our assumption of a normal distribution seems reasonable.
From a quick look at the statistics, we see that the average is 21.40, above 20. Does this average from our sample of 31 bars invalidate the label's claim of 20 grams of protein for the unknown entire population mean? Or not?
How to perform the one-sample t-test
For the t-test calculations we need the mean, standard deviation and sample size. These are shown in the summary statistics section of Figure 1 above.
We round the statistics to two decimal places. Software will show more decimal places, and use them in calculations. (Note that Table 1 shows only two decimal places; the actual data used to calculate the summary statistics has more.)
We start by finding the difference between the sample mean and 20:
$ 21.40-20\ =\ 1.40$
Next, we calculate the standard error for the mean. The calculation is:
Standard Error for the mean = $ \frac{s}{\sqrt{n}}= \frac{2.54}{\sqrt{31}}=0.456 $
This matches the value in Figure 1 above.
We now have the pieces for our test statistic. We calculate our test statistic as:
$ t = \frac{\text{Difference}}{\text{Standard Error}}= \frac{1.40}{0.456}=3.07 $
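The arithmetic above is easy to reproduce in code. A minimal sketch in Python, using the rounded summary statistics quoted in the text (mean 21.40, standard deviation 2.54, n = 31):

```python
import math

# Rounded summary statistics from the text
sample_mean = 21.40
hypothesized_mean = 20.0
s = 2.54   # sample standard deviation
n = 31     # sample size

difference = sample_mean - hypothesized_mean   # 1.40
standard_error = s / math.sqrt(n)              # ≈ 0.456
t_statistic = difference / standard_error      # ≈ 3.07

print(round(standard_error, 3), round(t_statistic, 2))
```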
To make our decision, we compare the test statistic to a value from the t-distribution. This activity involves four steps.
- We calculate a test statistic. Our test statistic is 3.07.
- We decide on the risk we are willing to take for declaring a difference when there is not a difference. For the energy bar data, we are willing to take a 5% risk of saying that the unknown population mean is different from 20 when in fact it is not. In statistics-speak, we set α = 0.05. In practice, the risk level (α) should be set before collecting the data.
- We find the value from the t-distribution based on our decision. For a t-test, we need the degrees of freedom, which are based on the sample size. For the energy bar data: degrees of freedom = $ n - 1 = 31 - 1 = 30 $. The critical value of t with α = 0.05 and 30 degrees of freedom is ±2.042. Most statistics books have look-up tables for the distribution, and you can also find tables online, but in practice you will most likely use software rather than printed tables.
- We compare the value of our statistic (3.07) to the t value. Since 3.07 > 2.042, we reject the null hypothesis that the mean grams of protein is equal to 20. We make the practical conclusion that the labels are incorrect and the population mean grams of protein is greater than 20.
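The critical-value lookup can be done with software instead of a printed table. A sketch using SciPy's t distribution (`scipy.stats.t.ppf`):

```python
from scipy import stats

alpha = 0.05
df = 31 - 1          # degrees of freedom
t_statistic = 3.07   # from the calculation above

# Two-sided test: put alpha/2 in each tail
t_critical = stats.t.ppf(1 - alpha / 2, df)
print(round(t_critical, 3))   # ≈ 2.042

if abs(t_statistic) > t_critical:
    print("Reject the null hypothesis: mean differs from 20")
else:
    print("Fail to reject the null hypothesis")
```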
Statistical details
Let’s look at the energy bar data and the one-sample t-test using statistical terms.
Our null hypothesis is that the underlying population mean is equal to 20. The null hypothesis is written as:
$ H_0: \mu = 20 $
The alternative hypothesis is that the underlying population mean is not equal to 20. The labels claiming 20 grams of protein would be incorrect. This is written as:
$ H_a: \mu \neq 20 $
This is a two-sided test. We are testing if the population mean is different from 20 grams in either direction. If we can reject the null hypothesis that the mean is equal to 20 grams, then we make a practical conclusion that the labels for the bars are incorrect. If we cannot reject the null hypothesis, then we make a practical conclusion that the labels for the bars may be correct.
We calculate the average for the sample and then calculate the difference with the population mean, mu:
$ \overline{x} - \mathrm{\mu} $
We calculate the standard error as:
$ \frac{s}{ \sqrt{n}} $
The formula shows the sample standard deviation as s and the sample size as n .
The test statistic uses the formula shown below:
$ \dfrac{\overline{x} - \mathrm{\mu}} {s / \sqrt{n}} $
We compare the test statistic to a t value with our chosen alpha value and the degrees of freedom for our data. Using the energy bar data as an example, we set α = 0.05. The degrees of freedom ( df ) are based on the sample size and are calculated as:
$ df = n - 1 = 31 - 1 = 30 $
Statisticians write the t value with α = 0.05 and 30 degrees of freedom as:
$ t_{0.05,30} $
The t value for a two-sided test with α = 0.05 and 30 degrees of freedom is ±2.042. There are two possible results from our comparison:
- The test statistic is less extreme than the critical t values; in other words, the test statistic is not less than -2.042, or is not greater than +2.042. You fail to reject the null hypothesis that the mean is equal to the specified value. In our example, you would be unable to conclude that the label for the protein bars should be changed.
- The test statistic is more extreme than the critical t values; in other words, the test statistic is less than -2.042, or is greater than +2.042. You reject the null hypothesis that the mean is equal to the specified value. In our example, you conclude that either the label should be updated or the production process should be improved to produce, on average, bars with 20 grams of protein.
Testing for normality
The normality assumption is more important for small sample sizes than for larger sample sizes.
Normal distributions are symmetric, which means they are “even” on both sides of the center. Normal distributions do not have extreme values, or outliers. You can check these two features of a normal distribution with graphs. Earlier, we decided that the energy bar data was “close enough” to normal to go ahead with the assumption of normality. The figure below shows a normal quantile plot for the data, and supports our decision.
You can also perform a formal test for normality using software. The figure below shows results of testing for normality with JMP software. We cannot reject the hypothesis of a normal distribution.
We can go ahead with the assumption that the energy bar data is normally distributed.
What if my data are not from a Normal distribution?
If your sample size is very small, it is hard to test for normality. In this situation, you might need to use your understanding of the measurements. For example, for the energy bar data, the company knows that the underlying distribution of grams of protein is normally distributed. Even for a very small sample, the company would likely go ahead with the t -test and assume normality.
What if you know the underlying measurements are not normally distributed? Or what if your sample size is large and the test for normality is rejected? In this situation, you can use a nonparametric test. Nonparametric analyses do not depend on an assumption that the data values are from a specific distribution. For the one-sample t-test, one possible nonparametric alternative is the Wilcoxon signed rank test.
Understanding p-values
Using a visual, you can check to see if your test statistic is more extreme than a specified value in the distribution. The figure below shows a t-distribution with 30 degrees of freedom.
Since our test is two-sided and we set α = 0.05, the figure shows that the values ±2.042 “cut off” 5% of the probability in the two tails combined.
The next figure shows our results. You can see the test statistic falls above the specified critical value. It is far enough “out in the tail” to reject the hypothesis that the mean is equal to 20.
Putting it all together with Software
You are likely to use software to perform a t-test. The figure below shows results for the one-sample t-test for the energy bar data from JMP software.
The software shows the null hypothesis value of 20 and the average and standard deviation from the data. The test statistic is 3.07. This matches the calculations above.
The software shows results for a two-sided test and for one-sided tests. We want the two-sided test. Our null hypothesis is that the mean grams of protein is equal to 20; our alternative hypothesis is that it is not equal to 20. The software shows a p-value of 0.0046 for the two-sided test. This p-value is the likelihood of seeing a sample average as far from 20 as 21.4, or farther, when the underlying population mean is actually 20. A p-value of 0.0046 means about 46 chances in 10,000. With such a small p-value, we feel confident in rejecting the null hypothesis that the population mean is equal to 20.
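The whole analysis can be reproduced with `scipy.stats.ttest_1samp`, using the values from Table 1. Because Table 1 rounds the raw data to two decimals, the results may differ slightly in the later decimals from the software output quoted above:

```python
from scipy import stats

# Grams of protein from Table 1 (rounded to two decimals)
protein = [
    20.70, 27.46, 22.15, 19.85, 21.29, 24.75,
    20.75, 22.91, 25.34, 20.33, 21.54, 21.08,
    22.14, 19.56, 21.10, 18.04, 24.12, 19.95,
    19.72, 18.28, 16.26, 17.46, 20.53, 22.12,
    25.06, 22.44, 19.08, 19.88, 21.39, 22.33, 25.79,
]

# Two-sided one-sample t-test against the label claim of 20 grams
result = stats.ttest_1samp(protein, popmean=20)
print(round(result.statistic, 2))   # ≈ 3.07
print(result.pvalue)                # well below alpha = 0.05
```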
Calcworkshop
One Sample T Test Easily Explained w/ 5+ Examples!
Last Updated: October 9, 2020
Did you know that a hypothesis test for a sample mean is the same thing as a one sample t-test?
Jenn, Founder Calcworkshop ® , 15+ Years Experience (Licensed & Certified Teacher)
Learn the how-to with 5 step-by-step examples.
Let’s go!
What is a One Sample T Test?
A one sample t-test determines whether or not the sample mean is statistically different (statistically significant) from a population mean.
While significance tests for population proportions are based on z-scores and the normal distribution, hypothesis testing for population means depends on whether or not the population standard deviation is known or unknown.
For a one sample t test, we compare a test variable against a test value. Whether or not we know the population standard deviation determines which test statistic we calculate.
T Test Vs. Z Test
So, determining whether to use a z-test or a t-test comes down to four things:
- Are we working with a proportion (z-test) or a mean (z-test or t-test)?
- Do we know the population standard deviation? (Known: z-test; unknown: t-test.)
- Is the population normally distributed?
- What is the sample size? If the sample is smaller than 30, use a t-test; if the sample is larger than 30, we can apply the central limit theorem, since the sampling distribution of the mean is approximately normal.
How To Calculate a Test Statistic
Standard deviation known.
If the population standard deviation is known , then our significance test will follow a z-value. And as we learned while conducting confidence intervals, if our sample size is larger than 30, we can apply the Central Limit Theorem and deem our sampling distribution approximately normal. If our sample size is less than 30, we instead rely on the population itself being normally distributed.
Z Test Statistic Formula
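The z test statistic this heading refers to (the original figure is not reproduced here) is:

$$z = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}$$

where $\bar{x}$ is the sample mean, $\mu_0$ the hypothesized mean, $\sigma$ the known population standard deviation, and $n$ the sample size.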
Standard Deviation Unknown
If the population standard deviation is unknown , we will use the sample standard deviation as an estimate of the unknown population standard deviation. But this also means we have to use a t-distribution instead of a normal distribution, as noted by StatTrek .
Just like we saw with confidence intervals for population means, the t-distribution has an additional parameter representing the degrees of freedom or the number of observations that can be chosen freely.
T Test Statistic Formula
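The t test statistic this heading refers to is the same ratio with the sample standard deviation $s$ in place of $\sigma$:

$$t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}, \qquad df = n - 1$$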
This means that our test statistic will be a t-value rather than a z-value. But thankfully, how we find our p-value and draw our final inference is the same as for hypothesis testing for proportions, as the graphic below illustrates.
How To Find The P Value
Example Question
For example, imagine a company wants to test the claim that their batteries last more than 40 hours. Using a simple random sample of 15 batteries yielded a mean of 44.9 hours, with a standard deviation of 8.9 hours. Test this claim using a significance level of 0.05.
One Sample T Test Example
How To Find P Value From T
So, our p-value is a probability: it tells us how likely it is to obtain a test statistic as extreme as, or more extreme than, the one we observed, assuming that the null hypothesis is true. To find this value we either use a calculator or a t-table, as we will demonstrate in the video.
We have significant evidence to support the company’s claim that their batteries last more than 40 hours.
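The battery example works out as follows; a sketch in Python (using `scipy.stats.t` for the tail probability), with the numbers taken from the problem statement:

```python
import math
from scipy import stats

# Given: n = 15 batteries, mean 44.9 h, s = 8.9 h, claim mu > 40, alpha = 0.05
n, sample_mean, s, mu0, alpha = 15, 44.9, 8.9, 40.0, 0.05

t_statistic = (sample_mean - mu0) / (s / math.sqrt(n))   # ≈ 2.13
p_value = stats.t.sf(t_statistic, df=n - 1)              # upper-tail probability

print(round(t_statistic, 2), round(p_value, 3))
if p_value < alpha:
    print("Reject H0: evidence the batteries last more than 40 hours")
```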
What Does The P Value Mean?
Together we will work through various examples of how to create a hypothesis test about population means using normal distributions and t-distributions.
One Sample T Test – Lesson & Examples (Video)
- Introduction to Video: One Sample t-test
- 00:00:43 – Steps for conducting a hypothesis test for population means (one sample z-test or one sample t-test)
- Exclusive Content for Members Only
- 00:03:49 – Conduct a hypothesis test and confidence interval when population standard deviation is known (Example #1)
- 00:13:49 – Test the null hypothesis when population standard deviation is known (Example #2)
- 00:18:56 – Use a one-sample t-test to test a claim (Example #3)
- 00:26:50 – Conduct a hypothesis test and confidence interval when population standard deviation is unknown (Example #4)
- 00:37:16 – Conduct a hypothesis test by using a one-sample t-test and provide a confidence interval (Example #5)
- 00:49:19 – Test the hypothesis by first finding the sample mean and standard deviation (Example #6)
- Practice Problems with Step-by-Step Solutions
- Chapter Tests with Video Solutions
One Sample T Test – Clearly Explained with Examples | ML+
- October 8, 2020
- Selva Prabhakaran
One sample T-Test tests if the given sample of observations could have been generated from a population with a specified mean.
If it is found from the test that the means are statistically different, we infer that the sample is unlikely to have come from the population.
For example, suppose you want to test a car manufacturer’s claim that their cars give a highway mileage of 20 kmpl on average. You sample 10 cars from the dealership, measure their mileage, and use the T-test to determine whether the manufacturer’s claim is supported.
By the end of this post, you will know when and how to do the T-Test: the concept, the math, how to set the null and alternate hypotheses, how to use the T-tables, how to understand one-tailed and two-tailed T-Tests, and how to implement the test in R and Python using a practical example.
Contents:
- Introduction
- Purpose of One Sample T Test
- How to Set the Null and Alternate Hypothesis
- Procedure to Do One Sample T Test
- One Sample T Test Example
- One Sample T Test Implementation
- How to Decide Which T Test to Perform: Two Tailed, Upper Tailed or Lower Tailed
- Related Posts
The ‘One sample T Test’ is one of the 3 types of T Tests . It is used when you want to test whether the mean of the population from which the sample is drawn equals a hypothesized value. You will understand this statement (and all about the One Sample T Test) better by the end of this post.
The T Test was first invented by William Sealy Gosset in 1908. Since he published his method under the pseudonym ‘Student’ in the journal Biometrika, the test came to be known as Student’s T Test.
Since it assumes that the sample mean follows a known sampling distribution (the t-distribution), the Student’s T Test is considered a parametric test.
The purpose of the One Sample T Test is to determine whether the sample observations could have come from a process that follows a specific parameter (like the mean).
It is typically implemented on small samples.
For example, given a sample of 15 items, you want to test if the sample mean is the same as a hypothesized mean (population). That is, essentially you want to know if the sample came from the given population or not.
Let’s suppose, you want to test if the mean weight of a manufactured component (from a sample size 15) is of a particular value (55 grams), with a 99% confidence.
How did we determine One sample T-test is the right test for this?
Because there is only one sample involved and you want to compare the mean of this sample against a particular (hypothesized) value.
To do this, you need to set up a null hypothesis and an alternate hypothesis .
The null hypothesis usually assumes that there is no difference between the sample mean and the hypothesized mean (comparison mean). The purpose of the T Test is to test whether the null hypothesis can be rejected.
Depending on how the problem is stated, the alternate hypothesis can be one of the following 3 cases:
- Case 1: H1 : x̅ != µ. Used when the true mean is not equal to the comparison mean. Use a Two Tailed T Test.
- Case 2: H1 : x̅ > µ. Used when the true mean is greater than the comparison mean. Use an Upper Tailed T Test.
- Case 3: H1 : x̅ < µ. Used when the true mean is less than the comparison mean. Use a Lower Tailed T Test.
Where x̅ is the sample mean and µ is the population mean for comparison. We will go more into the detail of these three cases after solving some practical examples.
Example 1: A customer service company wants to know if their support agents are performing on par with industry standards.
According to a report the standard mean resolution time is 20 minutes per ticket. The sample group has a mean at 21 minutes per ticket with a standard deviation of 7 minutes.
Can you tell if the company’s support performance is better than the industry standard or not?
Example 2: A farming company wants to know if a new fertilizer has improved crop yield or not.
Historic data shows the average yield of the farm is 20 tonne per acre. They decide to test a new organic fertilizer on a smaller sample of farms and observe the new yield is 20.175 tonne per acre with a standard deviation of 3.02 tonne for 12 different farms.
Did the new fertilizer work?
Step 1: Define the Null Hypothesis (H0) and Alternate Hypothesis (H1)
H0: Sample mean (x̅) = Hypothesized Population mean (µ)
H1: Sample mean (x̅) != Hypothesized Population mean (µ)
The alternate hypothesis can also state that the sample mean is greater than or less than the comparison mean.
Step 2: Compute the test statistic (T)
$$t = \frac{\bar{x} - \mu}{s/\sqrt{n}}$$
where $s$ is the sample standard deviation, $n$ is the sample size, and $s/\sqrt{n}$ is the standard error of the mean.
Step 3: Find the T-critical from the T-Table
Use the degrees of freedom and the alpha level (0.05) to find the T-critical.
Step 4: Determine if the computed test statistic falls in the rejection region.
Alternately, simply compute the P-value. If it is less than the significance level (0.05 or 0.01), reject the null hypothesis.
Problem Statement:
We have the potato yield from 12 different farms. We know that the standard potato yield for the given variety is µ=20.
Test if the potato yield from these farms is significantly better than the standard yield.
Step 1: Define the Null and Alternate Hypothesis
H0: x̅ = 20
H1: x̅ > 20
n = 12. Since this is a one sample T test, the degrees of freedom = n - 1 = 12 - 1 = 11.
Let’s set alpha = 0.05, to meet 95% confidence level.
Step 2: Calculate the Test Statistic (T)

1. Calculate the sample mean:

$$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n} = 20.175$$

2. Calculate the sample standard deviation:

$$s = \sqrt{\frac{(x_1 - \bar{x})^2 + (x_2 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2}{n-1}} = 3.0211$$

3. Substitute into the T Statistic formula:

$$T = \frac{\bar{x} - \mu}{s/\sqrt{n}} = \frac{20.175 - 20}{3.0211/\sqrt{12}} = 0.2006$$
Step 3: Find the T-Critical
Confidence level = 0.95, alpha = 0.05. For a one tailed test, look under the 0.05 column. For d.o.f. = 12 - 1 = 11, T-Critical = 1.796 .
Now you might wonder why a ‘One Tailed test’ was chosen. This is because of the way the alternate hypothesis is defined. Had the alternate hypothesis simply stated that the mean is not equal to 20, we would have gone for a two tailed test. More details about this topic in the next section.
Step 4: Does it fall in rejection region?
Since the computed T Statistic (0.2006) is less than the T-critical value (1.796), it does not fall in the rejection region. So, we do not reject the null hypothesis.
Since you want to perform a ‘One Tailed Greater than’ test (that is, the sample mean is greater than the comparison mean), you need to specify alternative='greater' in the t.test() function. Because, by default, the t.test() does a two tailed test (which is what you do when your alternate hypothesis simply states sample mean != comparison mean).
The P-value computed here is nothing but p = Pr(T > t) (upper-tailed), where t is the calculated T statistic.
In Python, One sample T Test is implemented in ttest_1samp() function in the scipy package. However, it does a Two tailed test by default , and reports a signed T statistic. That means, the reported P-value will always be computed for a Two-tailed test. To calculate the correct P value, you need to divide the output P-value by 2.
Apply the following logic if you are performing a one tailed test:
- For a greater-than test: Reject H0 if p/2 < alpha (0.05). In this case, t will be greater than 0.
- For a less-than test: Reject H0 if p/2 < alpha (0.05). In this case, t will be less than 0.
Since it is a one tailed test, the real p-value is 0.8446/2 = 0.4223. Either way, we do not reject the null hypothesis.
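Putting the Python steps together: the raw farm yields are not listed in the text above, so the twelve values below are illustrative, chosen to be consistent with the reported mean (20.175) and standard deviation (3.0211). Recent versions of SciPy (1.6+) accept an `alternative` argument, which avoids halving the two-tailed p-value by hand:

```python
from scipy import stats

# Illustrative yields (tonnes/acre), consistent with the reported summary stats
yields = [21.5, 24.5, 18.5, 17.2, 14.5, 23.2, 22.1, 20.5, 19.4, 18.1, 24.1, 18.5]

# Upper-tailed test of H0: mu = 20 vs H1: mu > 20
result = stats.ttest_1samp(yields, popmean=20, alternative='greater')
print(round(result.statistic, 4))   # ≈ 0.2007
print(round(result.pvalue, 4))      # ≈ 0.42, far above alpha = 0.05

# p > 0.05, so we fail to reject the null hypothesis
```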
The decision of whether the computed test statistic falls in the rejection region depends on how the alternate hypothesis is defined.
We know the Null Hypothesis is H0: µD = 0, where µD is the difference in the means, that is, the sample mean minus the comparison mean.
You can also write H0 as: x̅ = µ , where x̅ is sample mean and ‘µ’ is the comparison mean.
Case 1: If H1 : x̅ != µ , then rejection region lies on both tails of the T-Distribution (two-tailed). This means the alternate hypothesis just states the difference in means is not equal. There is no comparison if one of the means is greater or lesser than the other.
In this case, use Two Tailed T Test .
Here, P-value = 2 · Pr(T > |t|)
Case 2: If H1: x̅ > µ , then rejection region lies on upper tail of the T-Distribution (upper-tailed). If the mean of the sample of interest is greater than the comparison mean. Example: If Component A has a longer time-to-failure than Component B.
In such case, use Upper Tailed based test.
Here, P-value = Pr(T > t)
Case 3: If H1: x̅ < µ , then rejection region lies on lower tail of the T-Distribution (lower-tailed). If the mean of the sample of interest is lesser than the comparison mean.
In such case, use lower tailed test.
Here, P-value = Pr(T < t)
Hope you are now familiar with the One Sample T Test. If something is still not clear, write in the comments. The next topic is the Two Sample T Test . Stay tuned.
One Sample T Test: SPSS, By Hand, Step by Step
- What is the One Sample T Test?
- Example (By Hand)
What is a One Sample T Test?
The one sample t test compares the mean of your sample data to a known value. For example, you might want to know how your sample mean compares to the population mean . You should run a one sample t test when you don’t know the population standard deviation or you have a small sample size . For a full rundown on which test to use, see: T-score vs. Z-Score .
Assumptions of the test (your data should meet these requirements for the test to be valid):
- Data is independent .
- Data is collected randomly. For example, with simple random sampling .
- The data is approximately normally distributed .
One Sample T Test Example
Example question : your company wants to improve sales. Past sales data indicate that the average sale was $100 per transaction. After training your sales force, recent sales data (taken from a sample of 25 salesmen) indicates an average sale of $130, with a standard deviation of $15. Did the training work? Test your hypothesis at a 5% alpha level .
Step 1: Write your null hypothesis statement ( How to state a null hypothesis ). The accepted hypothesis is that there is no difference in sales, so: H 0 : μ = $100.
Step 2: Write your alternate hypothesis . This is the one you’re testing in the one sample t test. You think that there is a difference (that the mean sales increased), so: H 1 : μ > $100.
Step 3: Identify the following pieces of information you’ll need to calculate the test statistic. The question should give you these items:
- The sample mean (x̄). This is given in the question as $130.
- The population mean (μ). Given as $100 (from past data).
- The sample standard deviation (s) = $15.
- Number of observations (n) = 25.
Step 4: Calculate the test statistic: t = (x̄ − μ) / (s/√n) = (130 − 100) / (15/√25) = 30/3 = 10.
Step 5: Find the t-table value. You need two values to find this:
- The alpha level: given as 5% in the question.
- The degrees of freedom , which is the number of items in the sample (n) minus 1: 25 – 1 = 24.
Look up 24 degrees of freedom in the left column and 0.05 in the top row. The intersection is 1.711. This is your one-tailed critical t-value.
What this critical value means in a one tailed t test is that we would expect most values to fall below 1.711 when the null hypothesis is true. If our calculated t-value (from Step 4) falls below this cutoff, we fail to reject the null hypothesis.
Step 6: Compare Step 4 to Step 5. The value from Step 4 (10) does not fall within the range calculated in Step 5, so we can reject the null hypothesis . The value of 10 falls into the rejection region (the right tail).
In other words, it’s highly likely that the mean sale is greater. The one sample t test has told us that sales training was probably a success.
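As a check on the hand calculation, here is a Python sketch of the sales example (numbers from the question; the critical value comes from `scipy.stats.t.ppf`):

```python
import math
from scipy import stats

# From the question: n = 25 salesmen, sample mean $130, s = $15, H0: mu = $100
n, sample_mean, s, mu0, alpha = 25, 130.0, 15.0, 100.0, 0.05

t_statistic = (sample_mean - mu0) / (s / math.sqrt(n))   # (130 - 100)/(15/5) = 10.0
t_critical = stats.t.ppf(1 - alpha, df=n - 1)            # one-tailed, ≈ 1.711

print(t_statistic, round(t_critical, 3))
if t_statistic > t_critical:
    print("Reject H0: the training appears to have increased mean sales")
```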
Want to check your work? Take a look at Daniel Soper’s calculator . Just plug in your data to get the t-statistic and critical values.
An Introduction to t Tests | Definitions, Formula and Examples
Published on January 31, 2020 by Rebecca Bevans . Revised on June 22, 2023.
A t test is a statistical test that is used to compare the means of two groups. It is often used in hypothesis testing to determine whether a process or treatment actually has an effect on the population of interest, or whether two groups are different from one another.
- The null hypothesis ( H 0 ) is that the true difference between these group means is zero.
- The alternate hypothesis ( H a ) is that the true difference is different from zero.
Table of contents

- When to use a t test
- What type of t test should I use?
- Performing a t test
- Interpreting test results
- Presenting the results of a t test
- Other interesting articles
- Frequently asked questions about t tests
A t test can only be used when comparing the means of two groups (a.k.a. pairwise comparison). If you want to compare more than two groups, or if you want to do multiple pairwise comparisons, use an ANOVA test or a post-hoc test.
The t test is a parametric test of difference, meaning that it makes the same assumptions about your data as other parametric tests. The t test assumes your data:
- are independent
- are (approximately) normally distributed
- have a similar amount of variance within each group being compared (a.k.a. homogeneity of variance)
If your data do not fit these assumptions, you can try a nonparametric alternative to the t test, such as the Wilcoxon signed-rank test (for paired or one-sample designs) or the Mann-Whitney U test (for two independent groups), neither of which assumes normally distributed data.
When choosing a t test, you will need to consider two things: whether the groups being compared come from a single population or two different populations, and whether you want to test the difference in a specific direction.
One-sample, two-sample, or paired t test?
- If the groups come from a single population (e.g., measuring before and after an experimental treatment), perform a paired t test . This is a within-subjects design .
- If the groups come from two different populations (e.g., two different species, or people from two separate cities), perform a two-sample t test (a.k.a. independent t test ). This is a between-subjects design .
- If there is one group being compared against a standard value (e.g., comparing the acidity of a liquid to a neutral pH of 7), perform a one-sample t test .
One-tailed or two-tailed t test?
- If you only care whether the two populations are different from one another, perform a two-tailed t test .
- If you want to know whether one population mean is greater than or less than the other, perform a one-tailed t test.
In the flower petal example used below:
- Your observations come from two separate populations (separate species), so you perform a two-sample t test.
- You don’t care about the direction of the difference, only whether there is a difference, so you choose to use a two-tailed t test.
The t test estimates the true difference between two group means using the ratio of the difference in group means over the pooled standard error of both groups. You can calculate it manually using a formula, or use statistical analysis software.
T test formula
The formula for the two-sample t test (a.k.a. the Student’s t test) is:

t = (x̄1 − x̄2) / √( s² (1/n1 + 1/n2) )

In this formula, t is the t value, x̄1 and x̄2 are the means of the two groups being compared, s² is the pooled variance of the two groups, and n1 and n2 are the number of observations in each of the groups.
A larger absolute t value shows that the difference between group means is large relative to the pooled standard error, indicating stronger evidence of a real difference between the groups.
You can compare your calculated t value against the values in a critical value chart (e.g., Student’s t table) to determine whether your t value is greater than what would be expected by chance. If so, you can reject the null hypothesis and conclude that the two groups are in fact different.
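That comparison against the critical value is easy to sketch in code. The snippet below (Python with SciPy assumed available; the t value and degrees of freedom are made up for illustration) looks up the two-tailed critical value and applies the rejection rule:

```python
from scipy import stats

# Made-up calculated t value and degrees of freedom
t_value = 2.8
df = 18
alpha = 0.05  # two-tailed significance level

# Critical value: the quantile that cuts off alpha/2 in each tail
t_crit = stats.t.ppf(1 - alpha / 2, df)

# Reject the null hypothesis if |t| exceeds the critical value
reject_null = abs(t_value) > t_crit
print(round(t_crit, 3), reject_null)
```

For 18 degrees of freedom the two-tailed 5% critical value is about 2.101, so a t value of 2.8 would lead to rejecting the null hypothesis.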
T test function in statistical software
Most statistical software (R, SPSS, etc.) includes a t test function. This built-in function will take your raw data and calculate the t value. It will then compare it to the critical value, and calculate a p -value . This way you can quickly see whether your groups are statistically different.
In your comparison of flower petal lengths, you decide to perform your t test using R. With the flower.data set used in this example, the code looks like this:

t.test(Petal.Length ~ Species, data = flower.data)
Download the data set to practice by yourself.
If you perform the t test for your flower hypothesis in R, you will receive the following output:
The output provides:
- An explanation of what is being compared, called data in the output table.
- The t value : -33.719. Note that it’s negative; this is fine! In most cases, we only care about the absolute value of the difference, or the distance from 0. It doesn’t matter which direction.
- The degrees of freedom : 30.196. Degrees of freedom is related to your sample size, and shows how many ‘free’ data points are available in your test for making comparisons. The greater the degrees of freedom, the better your statistical test will work.
- The p value : 2.2e-16 (i.e. 0.00000000000000022, a 2.2 preceded by 15 zeros after the decimal point). This describes the probability of observing a t value at least this extreme by chance if the null hypothesis were true.
- A statement of the alternative hypothesis ( H a ). In this test, the H a is that the difference is not 0.
- The 95% confidence interval . This is the range of values that would contain the true difference in means in 95% of repeated samples. You can use a level other than 95% if you want a wider or narrower interval, but 95% is very commonly used.
- The mean petal length for each group.
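The article runs its test in R; for readers working in Python, a roughly equivalent call is available in SciPy. This sketch uses invented petal lengths for two species, not the article's data, and performs Welch's t test (the R default, which does not assume equal variances):

```python
from scipy import stats

# Invented petal lengths for two species (not the article's data)
species_a = [1.4, 1.3, 1.5, 1.4, 1.6, 1.4, 1.5]
species_b = [4.7, 4.5, 4.9, 4.6, 4.8, 4.7, 5.0]

# Welch's two-sample t test: does not assume equal group variances
result = stats.ttest_ind(species_a, species_b, equal_var=False)

print(result.statistic)  # t value; negative because the first group's mean is smaller
print(result.pvalue)     # two-tailed p value
```

As in the article's output, a negative t value just reflects the order of the groups; only its absolute size matters for significance.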
When reporting your t test results, the most important values to include are the t value , the p value , and the degrees of freedom for the test. These will communicate to your audience whether the difference between the two groups is statistically significant (a.k.a. that it is unlikely to have happened by chance).
You can also include the summary statistics for the groups being compared, namely the mean and standard deviation . In R, the code for calculating the mean and the standard deviation from the data looks like this:
flower.data %>% group_by(Species) %>% summarize(mean_length = mean(Petal.Length), sd_length = sd(Petal.Length))
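For readers using Python instead of R, a comparable summary with pandas might look like this (the data frame here is a made-up stand-in for flower.data, mirroring its column names):

```python
import pandas as pd

# Made-up stand-in for the article's flower.data
flower_data = pd.DataFrame({
    "Species": ["A", "A", "A", "B", "B", "B"],
    "Petal.Length": [1.4, 1.5, 1.3, 4.7, 4.5, 4.9],
})

# Group means and standard deviations, mirroring the dplyr summary above
summary = flower_data.groupby("Species")["Petal.Length"].agg(
    mean_length="mean", sd_length="std"
)
print(summary)
```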
In our example, you would report the results like this:
If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.
- Chi square test of independence
- Statistical power
- Descriptive statistics
- Degrees of freedom
- Pearson correlation
- Null hypothesis
Methodology
- Double-blind study
- Case-control study
- Research ethics
- Data collection
- Hypothesis testing
- Structured interviews
Research bias
- Hawthorne effect
- Unconscious bias
- Recall bias
- Halo effect
- Self-serving bias
- Information bias
A t-test is a statistical test that compares the means of two samples . It is used in hypothesis testing , with a null hypothesis that the difference in group means is zero and an alternate hypothesis that the difference in group means is different from zero.
A t-test measures the difference in group means divided by the pooled standard error of the two group means.
In this way, it calculates a number (the t-value) illustrating the magnitude of the difference between the two group means being compared, and estimates the likelihood that this difference arose purely by chance (the p-value).
Your choice of t-test depends on whether you are studying one group or two groups, and whether you care about the direction of the difference in group means.
If you are studying one group, use a paired t-test to compare the group mean over time or after an intervention, or use a one-sample t-test to compare the group mean to a standard value. If you are studying two groups, use a two-sample t-test .
If you want to know only whether a difference exists, use a two-tailed test . If you want to know if one group mean is greater or less than the other, use a left-tailed or right-tailed one-tailed test .
A one-sample t-test is used to compare a single population to a standard value (for example, to determine whether the average lifespan of people in a specific town is different from the country average).
A paired t-test is used to compare a single population before and after some experimental intervention or at two different points in time (for example, measuring student performance on a test before and after being taught the material).
A t-test should not be used to measure differences among more than two groups, because the error structure for a t-test will underestimate the actual error when many groups are being compared.
If you want to compare the means of several groups at once, it’s best to use another statistical test such as ANOVA or a post-hoc test.
Cite this Scribbr article
Bevans, R. (2023, June 22). An Introduction to t Tests | Definitions, Formula and Examples. Scribbr. Retrieved October 7, 2024, from https://www.scribbr.com/statistics/t-test/
Statistics By Jim
Making statistics intuitive
How t-Tests Work: 1-sample, 2-sample, and Paired t-Tests
By Jim Frost
T-tests are statistical hypothesis tests that analyze one or two sample means. When you analyze your data with any t-test, the procedure reduces your entire sample to a single value, the t-value. In this post, I describe how each type of t-test calculates the t-value. I don’t explain this just so you can understand the calculation, but I describe it in a way that really helps you grasp how t-tests work.
How 1-Sample t-Tests Calculate t-Values
The equation for how the 1-sample t-test produces a t-value based on your sample is below:

t = (x̄ − μ0) / (s / √n)

where x̄ is the sample mean, μ0 is the null hypothesis value, s is the sample standard deviation, and n is the sample size.
This equation is a ratio, and a common analogy is the signal-to-noise ratio. The numerator is the signal in your sample data, and the denominator is the noise. Let’s see how t-tests work by comparing the signal to the noise!
The Signal – The Size of the Sample Effect
In the signal-to-noise analogy, the numerator of the ratio is the signal. The effect that is present in the sample is the signal. It’s a simple calculation. In a 1-sample t-test, the sample effect is the sample mean minus the value of the null hypothesis. That’s the top part of the equation.
For example, if the sample mean is 20 and the null value is 5, the sample effect size is 15. We’re calling this the signal because this sample estimate is our best estimate of the population effect.
The calculation for the signal portion of t-values is such that when the sample effect equals zero, the numerator equals zero, which in turn means the t-value itself equals zero. The estimated sample effect (signal) equals zero when there is no difference between the sample mean and the null hypothesis value. For example, if the sample mean is 5 and the null value is 5, the signal equals zero (5 – 5 = 0).
The size of the signal increases when the difference between the sample mean and null value increases. The difference can be either negative or positive, depending on whether the sample mean is greater than or less than the value associated with the null hypothesis.
A relatively large signal in the numerator produces t-values that are further away from zero.
The Noise – The Variability or Random Error in the Sample
The denominator of the ratio is the standard error of the mean, which measures the sample variation. The standard error of the mean represents how much random error is in the sample and how well the sample estimates the population mean.
As the value of this statistic increases, the sample mean provides a less precise estimate of the population mean. In other words, high levels of random error increase the probability that your sample mean is further away from the population mean.
In our analogy, random error represents noise. Why? When there is more random error, you are more likely to see considerable differences between the sample mean and the null hypothesis value in cases where the null is true . Noise appears in the denominator to provide a benchmark for how large the signal must be to distinguish from the noise.
Signal-to-Noise ratio
Our signal-to-noise ratio analogy equates to:

t-value = signal / noise = (sample mean − null hypothesis value) / (standard error of the mean)
Both of these statistics are in the same units as your data. Let’s calculate a couple of t-values to see how to interpret them.
- If the signal is 10 and the noise is 2, your t-value is 5. The signal is 5 times the noise.
- If the signal is 10 and the noise is 5, your t-value is 2. The signal is 2 times the noise.
The signal is the same in both examples, but it is easier to distinguish from the lower amount of noise in the first example. In this manner, t-values indicate how clear the signal is from the noise. If the signal is of the same general magnitude as the noise, it’s probable that random error causes the difference between the sample mean and null value rather than an actual population effect.
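These signal and noise pieces are easy to compute directly. A small NumPy sketch with invented data, reusing the post's example of a sample mean of 20 against a null hypothesis value of 5:

```python
import numpy as np

# Invented sample whose mean is 20; the null hypothesis value is 5
sample = np.array([18.0, 21.0, 19.5, 22.5, 20.0, 19.0])
null_value = 5.0

signal = sample.mean() - null_value                 # sample effect: 20 - 5 = 15
noise = sample.std(ddof=1) / np.sqrt(len(sample))   # standard error of the mean
t_value = signal / noise                            # signal-to-noise ratio

print(signal, noise, t_value)
```

Because this invented sample varies very little, the noise is small and the t-value is far from zero, i.e. the signal stands out clearly.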
Paired t-Tests Are Really 1-Sample t-Tests
Paired t-tests require dependent samples. I’ve seen a lot of confusion over how a paired t-test works and when you should use it. Pssst! Here’s a secret! Paired t-tests and 1-sample t-tests are the same hypothesis test incognito!
You use a 1-sample t-test to assess the difference between a sample mean and the value of the null hypothesis.
A paired t-test takes paired observations (like before and after), subtracts one from the other, and conducts a 1-sample t-test on the differences. Typically, a paired t-test determines whether the paired differences are significantly different from zero.
Download the CSV data file to check this yourself: T-testData . All of the statistical results are the same when you perform a paired t-test using the Before and After columns versus performing a 1-sample t-test on the Differences column.
Once you realize that paired t-tests are the same as 1-sample t-tests on paired differences, you can focus on the deciding characteristic: does it make sense to analyze the differences between two columns?
Suppose the Before and After columns contain test scores and there was an intervention in between. If each row in the data contains the same subject in the Before and After column, it makes sense to find the difference between the columns because it represents how much each subject changed after the intervention. The paired t-test is a good choice.
On the other hand, if a row has different subjects in the Before and After columns, it doesn’t make sense to subtract the columns. You should use the 2-sample t-test described below.
The paired t-test is a convenience for you. It eliminates the need for you to calculate the difference between two columns yourself. Remember, double-check that this difference is meaningful! If using a paired t-test is valid, you should use it because it provides more statistical power than the 2-sample t-test, which I discuss in my post about independent and dependent samples .
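The equivalence between the paired t-test and a 1-sample t-test on the differences is easy to verify numerically. A sketch in Python with SciPy, using invented before/after scores rather than the post's CSV data:

```python
import numpy as np
from scipy import stats

# Invented before/after test scores for the same five subjects
before = np.array([72.0, 65.0, 80.0, 58.0, 77.0])
after = np.array([78.0, 70.0, 83.0, 64.0, 79.0])

paired = stats.ttest_rel(after, before)            # paired t-test
one_sample = stats.ttest_1samp(after - before, 0)  # 1-sample t-test on the differences

# Both calls produce identical t and p values
print(paired.statistic, one_sample.statistic)
print(paired.pvalue, one_sample.pvalue)
```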
How Two-Sample T-tests Calculate T-Values
Use the 2-sample t-test when you want to analyze the difference between the means of two independent samples. This test is also known as the independent samples t-test . Click the link to learn more about its hypotheses, assumptions, and interpretations.
Like the other t-tests, this procedure reduces all of your data to a single t-value in a process similar to the 1-sample t-test. The signal-to-noise analogy still applies.
Here’s the equation for the t-value in a 2-sample t-test.
The equation is still a ratio, and the numerator still represents the signal. For a 2-sample t-test, the signal, or effect, is the difference between the two sample means. This calculation is straightforward. If the first sample mean is 20 and the second mean is 15, the effect is 5.
Typically, the null hypothesis states that there is no difference between the two samples. In the equation, if both groups have the same mean, the numerator, and the ratio as a whole, equals zero. Larger differences between the sample means produce stronger signals.
The denominator again represents the noise for a 2-sample t-test. However, you can use two different values depending on whether you assume that the variation in the two groups is equal or not. Most statistical software let you choose which value to use.
Regardless of the denominator value you use, the 2-sample t-test works by determining how distinguishable the signal is from the noise. To ascertain that the difference between means is statistically significant, you need a high positive or negative t-value.
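The two denominator choices correspond to the equal_var option in most statistics packages. A SciPy sketch with invented groups of unequal size and spread (with equal group sizes the two t-values would coincide, so the groups here deliberately differ in size):

```python
from scipy import stats

# Invented groups with unequal sizes and unequal spread
group1 = [20.1, 19.8, 21.0, 20.5, 19.9, 20.7]
group2 = [15.2, 14.8, 16.1, 18.9, 12.3]

pooled = stats.ttest_ind(group1, group2, equal_var=True)   # pooled-variance denominator
welch = stats.ttest_ind(group1, group2, equal_var=False)   # Welch's separate-variance denominator

# The two denominators give different t values (and degrees of freedom)
print(pooled.statistic, welch.statistic)
```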
How Do T-tests Use T-values to Determine Statistical Significance?
Here’s what we’ve learned about the t-values for the 1-sample t-test, paired t-test, and 2-sample t-test:
- Each test reduces your sample data down to a single t-value based on the ratio of the effect size to the variability in your sample.
- A t-value of zero indicates that your sample results match the null hypothesis precisely.
- Larger absolute t-values represent stronger signals, or effects, that stand out more from the noise.
For example, a t-value of 2 indicates that the signal is twice the magnitude of the noise.
Great … but how do you get from that to determining whether the effect size is statistically significant? After all, the purpose of t-tests is to assess hypotheses. To find out, read the companion post to this one: How t-Tests Work: t-Values, t-Distributions and Probabilities . Click here for step-by-step instructions on how to do t-tests in Excel !
If you’d like to learn about other hypothesis tests using the same general approach, read my posts about:
- How F-tests Work in ANOVA
- How Chi-Squared Tests of Independence Work
Reader Interactions
January 9, 2023 at 11:11 am
Hi Jim, thank you for explaining this I will revert to this during my 8 weeks in class everyday to make sure I understand what I’m doing . May I ask more questions in the future.
November 27, 2021 at 1:37 pm
This was an awesome piece, very educative and easy to understand
June 19, 2021 at 1:53 pm
Hi Jim, I found your posts very helpful. Could you plz explain how to do T test for a panel data?
June 19, 2021 at 3:40 pm
You’re limited by what you can do with t-tests. For panel data and t-tests, you can compare the same subjects at two points in time using a paired t-test. For more complex arrangements, you can use repeated measures ANOVA or specify a regression model to meet your needs.
February 11, 2020 at 10:34 pm
Hi Jim: I was reviewing this post in preparation for an analysis I plan to do, and I’d like to ask your advice. Each year, staff complete an all-employee survey, and results are reported at workgroup level of analysis. I would like to compare mean scores of several workgroups from one year to the next (in this case, 2018 and 2019 scores). For example, I would compare workgroup mean scores on psychological safety between 2018 and 2019. I am leaning toward a paired t test. However, my one concern is that….even though I am comparing workgroup to workgroup from one year to the next….it is certainly possible that there may be some different employees in a given workgroup from one year to the next (turnover, transition, etc.)….Assuming that is the case with at least some of the workgroups, does that make a paired t test less meanginful? Would I still use a paired t test or would another type t test be more appropriate? I’m thinking because we are dealing with workgroup mean scores (and not individual scores), then it may still be okay to compare meaningfully (avoiding an ecological fallacy). Thoughts?
Many thanks for these great posts. I enjoy reading them…!
April 8, 2019 at 11:22 pm
Hi jim. First of all, I really appreciate your posts!
When I use t-test via R or scikit learn, there is an option for homogeneity of variance. I think that option only applied to two sample t-test, but what should I do for that option?
Should I always perform f-test for check the homogeneity of variance? or Which one is a more strict assumption?
November 9, 2018 at 12:03 am
This blog is great. I’m at Stanford and can say this is a great supplement to class lectures. I love the fact that there aren’t formulas so as to get an intuitive feel. Thank you so much!
November 9, 2018 at 9:12 am
Thanks Mel! I’m glad it has been helpful! Your kind words mean a lot to me because I really strive to make these topics as easy to understand as possible!
December 29, 2017 at 4:14 pm
Thank you so much Jim! I have such a hard time understanding statistics without people like you who explain it using words to help me conceptualize rather than utilizing symbols only!
December 29, 2017 at 4:56 pm
Thank you, Jessica! Your kind words made my day. That’s what I want my blog to be all about. Providing simple but 100% accurate explanations for statistical concepts!
Happy New Year!
October 22, 2017 at 2:38 pm
Hi Jim, sure, I’ll go through it…Thank you..!
October 22, 2017 at 4:50 am
In summary, the t test tells, how the sample mean is different from null hypothesis, i.e. how the sample mean is different from null, but how does it comment about the significance? Is it like “more far from null is the more significant”? If it is so, could you give some more explanation about it?
October 22, 2017 at 2:30 pm
Hi Omkar, you’re in luck, I’ve written an entire blog post that talks about how t-tests actually use the t-values to determine statistical significance. In general, the further away from zero, the more significant it is. For all the information, read this post: How t-Tests Work: t-Values, t-Distributions, and Probabilities . I think this post will answer your questions.
September 12, 2017 at 2:46 am
Excellent explanation, appreciate you..!!
September 12, 2017 at 8:48 am
Thank you, Santhosh! I’m glad you found it helpful!
Comments and Questions
One sample t-test
The t-test is one of the most common hypothesis tests in statistics. It determines either whether a sample mean differs from a known population mean or whether two sample means differ from each other. The t-test distinguishes between:
- one sample t-test
- independent sample t-test
- paired samples t-test
The choice of which t-test to use depends on whether one or two samples are available. If two samples are available, a distinction is made between dependent and independent samples. In this tutorial you will find everything about the one sample t-test .
Tip: Do you want to calculate the t-value? You can easily calculate it for all three t-tests online in the t-test calculator on DATAtab
The one sample t-test is used to test whether the population differs from a fixed value. So, the question is: Are there statistically significant differences between a sample mean and the fixed value? The set value may, for example, reflect the remaining population percentage or a set quality target that is to be controlled.
Social science example:
You want to find out whether the health perception of managers in Canada differs from that of the population as a whole. For this purpose you ask 50 managers about their perception of health.
Technical example:
You want to find out if the screws your company produces really weigh 10 grams on average. To test this, weigh 50 screws and compare the actual weight with the weight they should have (10 grams).
Medical example:
A pharmaceutical company promises that its new drug lowers blood pressure by 10 mmHg in one week. You want to find out if this is correct. To do this, compare the observed reduction in blood pressure of 75 test subjects with the expected reduction of 10 mmHg.
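As a sketch of the technical example above in Python with SciPy (the screw weights below are invented for illustration):

```python
from scipy import stats

# Invented weights (in grams) for ten sampled screws; the target weight is 10 g
weights = [10.2, 9.9, 10.1, 10.3, 9.8, 10.0, 10.4, 9.7, 10.2, 10.1]

result = stats.ttest_1samp(weights, popmean=10.0)

# A p value above the significance level means there is no evidence
# that the mean weight differs from the 10 g target
print(result.statistic, result.pvalue)
```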
Assumptions
In a one sample t-test, the data under consideration must be from a random sample, have metric scale of measurement , and be normally distributed.
One tailed and two tailed t-test
So if you want to know whether a sample differs from the population, you have to calculate a one sample t-test . But before the t-test can be calculated, a question and the hypotheses must first be defined. This determines whether a one tailed (directional) or a two tailed (non-directional) t-test must be calculated.
The question helps you to define the object of investigation. In the case of the one sample t-test the question is:
Two tailed (non-directional)
Is there a statistically significant difference between the mean value of the sample and the population?
One tailed (directional)
Is the mean value of the sample significantly larger (or smaller) than the mean value of the population?
For the examples above, this gives us the following questions:
- Does the health perception of managers in Canada differ from that of the overall population in Canada?
- Does the production plant produce screws with a weight of 10 grams?
- Does the new drug lower blood pressure by 10 mmHg within one week?
Hypotheses t-Test
In order to perform a one sample t-test, the following hypotheses are formulated:
- Null hypothesis H 0 : The mean value of the population is equal to the specified value.
- Alternative hypothesis H 1 : The mean value of the population is not equal to the specified value.
- Null hypothesis H 0 : The mean value of the population is equal to or greater than (or less than) that of the specified value.
- Alternative hypothesis H 1 : The mean value of the population is smaller (or larger) than the specified values.
One sample t-test equation
You can calculate the t-test either with a statistics software like DATAtab or by hand. For the calculation by hand you first need the test statistic t , which for the one sample t-test is calculated with the equation

t = (x̄ − μ0) / (s / √n)

where x̄ is the sample mean, μ0 is the specified value, n is the sample size, and s is the population standard deviation estimated using the sample standard deviation.

In order to check whether the sample mean differs significantly from that of the population, the critical t-value must be calculated. First the number of degrees of freedom, abbreviated df , is required; it equals the number of observations minus one (df = n − 1).
If the number of degrees of freedom is known, the critical t-value can be determined using the table of t-values . For a sample of 12 people, the degrees of freedom are 11, and the significance level is assumed to be 5%. The table below gives critical t-values by one-tailed area. Depending on whether you want to calculate a one tailed (directional) or two tailed (non-directional) t-test, you read the t-value at either 0.95 or 0.975. For the non-directional hypothesis at a significance level of 5%, the critical t-value is 2.201.
If the calculated t value is below the critical t value, there is no significant difference between the sample and the population; if it is above the critical t value, there is a significant difference.
Area (one tailed):

| Degrees of freedom | 0.5 | 0.75 | 0.8 | 0.85 | 0.9 | 0.95 | 0.975 | 0.99 | 0.995 | 0.999 | 0.9995 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 9 | 0 | 0.703 | 0.883 | 1.1 | 1.383 | 1.833 | 2.262 | 2.821 | 3.25 | 4.297 | 4.781 |
| 10 | 0 | 0.7 | 0.879 | 1.093 | 1.372 | 1.812 | 2.228 | 2.764 | 3.169 | 4.144 | 4.587 |
| 11 | 0 | 0.697 | 0.876 | 1.088 | 1.363 | 1.796 | 2.201 | 2.718 | 3.106 | 4.025 | 4.437 |
| 12 | 0 | 0.695 | 0.873 | 1.083 | 1.356 | 1.782 | 2.179 | 2.681 | 3.055 | 3.93 | 4.318 |
| 13 | 0 | 0.694 | 0.87 | 1.079 | 1.35 | 1.771 | 2.16 | 2.65 | 3.012 | 3.852 | 4.221 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
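The critical values in the row for 11 degrees of freedom can be reproduced with any statistics library; for example, in Python with SciPy:

```python
from scipy import stats

# Two tailed 5% test: read the 0.975 quantile for df = 11
t_crit_two_tailed = stats.t.ppf(0.975, df=11)
# One tailed 5% test: read the 0.95 quantile for df = 11
t_crit_one_tailed = stats.t.ppf(0.95, df=11)

print(round(t_crit_two_tailed, 3))  # 2.201, matching the table row for df = 11
print(round(t_crit_one_tailed, 3))  # 1.796
```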
Interpret t-value
The t-value is calculated by dividing the measured difference by the scatter in the sample data. The larger the magnitude of t, the more this argues against the null hypothesis. If the calculated t-value is larger than the critical t-value, the null hypothesis is rejected.
Number of degrees of freedom - df
The number of degrees of freedom indicates how many values are allowed to vary freely. The degrees of freedom are therefore the number of independent individual pieces of information.
One sample t-test example
As an example for the t-test for one sample, we examine whether an online statistics tutorial newly introduced at the university has an effect on the students' examination results.
The average score in the statistics test at a university has been 28 points for years. This semester a new online statistics tutorial was introduced. Now the course management would like to know whether the success of the studies has changed since the introduction of the statistics tutorial: Does the online statistics tutorial have a positive effect on exam results?
The population considered is all students who have written the statistics exam since the new statistics tutorial was introduced. The reference value to be compared is 28.
Null hypothesis H0
The mean value of the sample does not differ significantly from the predefined value. The online statistics tutorial has no significant effect on exam results.
| Student | Score |
|---|---|
| 1 | 28 |
| 2 | 29 |
| 3 | 35 |
| 4 | 37 |
| 5 | 32 |
| 6 | 26 |
| 7 | 37 |
| 8 | 39 |
| 9 | 22 |
| 10 | 29 |
| 11 | 36 |
| 12 | 38 |
Here's how it goes on DATAtab:
Do you want to calculate a t-test independently? Calculate the example in the Statistics Calculator. Just copy the upper table including the first row into the t-Test Calculator . Datatab will then provide you with the tables below.
The following results are obtained with DATAtab: the mean value is 32.33 and the standard deviation 5.47. This leads to a standard error of the mean of 1.58. The t statistic is thus 2.75.
You would now like to know whether this result is significant, i.e. whether the sample mean differs significantly from the reference value of 28. To do this, you first specify a significance level in DATAtab; usually 5% is used, which is preselected. You will then get the tables below in DATAtab.
|   | n | Mean value | Standard deviation | Standard error of the mean |
|---|---|---|---|---|
| Score | 12 | 32.33 | 5.47 | 1.58 |
One sample t-test (Test Value = 28)
|   | t | df | p |
|---|---|---|---|
| Score | 2.75 | 11 | 0.02 |
95% confidence interval of the difference
|   | Mean value difference | Lower | Upper |
|---|---|---|---|
| Score | 4.33 | 0.86 | 7.81 |
To interpret whether your hypothesis is significant one of the two values can be used:
- p-value (2-tailed)
- lower and upper confidence interval of the difference
In this example the p-value (2-tailed) is 0.02, i.e. 2%. Put into words: if the population mean really were 28, the probability of drawing a sample with a mean difference of 4.33 or more is 2%. The significance level was set at 5%, and since 2% is smaller than 5%, a significant difference between the sample and the population is assumed.
Whether or not there is a significant difference can also be read from the confidence interval of the difference. If the interval between the lower and upper limits includes zero, there is no significant difference; if it does not, there is a significant difference. In this example, the lower value is 0.86 and the upper value is 7.81. Since this interval does not include zero, there is a significant difference.
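The figures in the tables above can be reproduced in Python with SciPy, using the twelve scores from the example table:

```python
import numpy as np
from scipy import stats

scores = np.array([28, 29, 35, 37, 32, 26, 37, 39, 22, 29, 36, 38], dtype=float)
test_value = 28.0

# One sample t-test against the reference value of 28
result = stats.ttest_1samp(scores, test_value)
print(round(result.statistic, 2), round(result.pvalue, 2))  # 2.75 0.02

# 95% confidence interval of the mean difference
diff = scores.mean() - test_value
se = scores.std(ddof=1) / np.sqrt(len(scores))
margin = stats.t.ppf(0.975, df=len(scores) - 1) * se
print(round(diff - margin, 2), round(diff + margin, 2))     # 0.86 7.81
```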
APA Style | One sample t-test
If we were to report these results for publication in APA format, we would write them this way:

A one sample t-test showed a statistically significant difference between the mean score of students who attended the online course (M = 32.33, SD = 5.47) and the reference value of 28, t(11) = 2.75, p = .02.
Cite DATAtab: DATAtab Team (2024). DATAtab: Online Statistics Calculator. DATAtab e.U. Graz, Austria. URL https://datatab.net