## Guide To Inductive & Deductive Reasoning

Induction vs. Deduction

October 15, 2008, by The Critical Thinking Co. Staff

Induction and deduction are pervasive elements in critical thinking. They are also somewhat misunderstood terms. Arguments based on experience or observation are best expressed inductively, while arguments based on laws or rules are best expressed deductively. Most arguments are mainly inductive. In fact, inductive reasoning usually comes much more naturally to us than deductive reasoning.

Inductive reasoning moves from specific details and observations (typically of nature) to the more general underlying principles or process that explains them (e.g., Newton's Law of Gravity). It is open-ended and exploratory, especially at the beginning. The premises of an inductive argument are believed to support the conclusion, but do not ensure it. Thus, the conclusion of an induction is regarded as a hypothesis. In the inductive method, also called the scientific method, observation of nature is the authority.

In contrast, deductive reasoning typically moves from general truths to specific conclusions. It opens with an expansive explanation (statements known or believed to be true) and continues with predictions for specific observations supporting it. Deductive reasoning is narrow in nature and is concerned with testing or confirming a hypothesis. It is dependent on its premises: a false premise can lead to a false result, and inconclusive premises will yield an inconclusive conclusion. Deductive reasoning leads to a confirmation (or not) of our original theories. If the premises are true, it guarantees the correctness of the conclusion. Logic is the authority in the deductive method.

If you can strengthen your argument or hypothesis by adding another piece of information, you are using inductive reasoning. If you cannot improve your argument by adding more evidence, you are employing deductive reasoning.

## Jessie Ball duPont Library

Critical Thinking Skills: What Is Inductive Reasoning?


Inductive reasoning: conclusion merely likely

Inductive reasoning begins with observations that are specific and limited in scope, and proceeds to a generalized conclusion that is likely, but not certain, in light of accumulated evidence. You could say that inductive reasoning moves from the specific to the general. Much scientific research is carried out by the inductive method: gathering evidence, seeking patterns, and forming a hypothesis or theory to explain what is seen.

Conclusions reached by the inductive method are not logical necessities; no amount of inductive evidence guarantees the conclusion. This is because there is no way to know that all the possible evidence has been gathered, and that there exists no further bit of unobserved evidence that might invalidate the hypothesis. Thus, while the newspapers might report the conclusions of scientific research as absolutes, scientific literature itself uses more cautious language, the language of inductively reached, probable conclusions:

What we have seen is the ability of these cells to feed the blood vessels of tumors and to heal the blood vessels surrounding wounds. The findings suggest that these adult stem cells may be an ideal source of cells for clinical therapy. For example, we can envision the use of these stem cells for therapies against cancer tumors [...].

Because inductive conclusions are not logical necessities, inductive arguments are not simply true. Rather, they are cogent: that is, the evidence seems complete, relevant, and generally convincing, and the conclusion is therefore probably true. Nor are inductive arguments simply false; rather, they are not cogent.

An important difference from deductive reasoning is that, while inductive reasoning cannot yield an absolutely certain conclusion, it can actually increase human knowledge (it is ampliative). It can make predictions about future events or as-yet unobserved phenomena.

For example, Albert Einstein observed the movement of a pocket compass when he was five years old and became fascinated with the idea that something invisible in the space around the compass needle was causing it to move. This observation, combined with additional observations (of moving trains, for example) and the results of logical and mathematical tools (deduction), resulted in a rule that fit his observations and could predict events that were as yet unobserved.

- Last Updated: Jul 28, 2020 5:53 PM
- URL: https://library.sewanee.edu/critical_thinking


## Inductive Reasoning | Types, Examples, Explanation

Published on January 12, 2022 by Pritha Bhandari . Revised on June 22, 2023.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you go from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

Note: Inductive reasoning is often confused with deductive reasoning. However, in deductive reasoning, you make inferences by going from general premises to specific conclusions.

## Table of contents

- What is inductive reasoning?
- Inductive reasoning in research
- Types of inductive reasoning
- Inductive generalization
- Statistical generalization
- Causal reasoning
- Sign reasoning
- Analogical reasoning
- Inductive vs. deductive reasoning
- Other interesting articles
- Frequently asked questions about inductive reasoning

Inductive reasoning is a logical approach to making inferences, or conclusions. People often use inductive reasoning informally in everyday situations.

You may have come across inductive logic examples that come in a set of three statements. These start with one specific observation, add a general pattern, and end with a conclusion.

| Stage | Example 1 | Example 2 |
|---|---|---|
| Specific observation | Nala is an orange cat and she purrs loudly. | Baby Jack said his first word at the age of 12 months. |
| Pattern recognition | Every orange cat I’ve met purrs loudly. | All babies say their first word at the age of 12 months. |
| General conclusion | All orange cats purr loudly. | All babies say their first word at the age of 12 months. |

In inductive research, you start by making observations or gathering data. Then, you take a broad view of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

You distribute a survey to pet owners. You ask about the type of animal they have and any behavioral changes they’ve noticed in their pets since they started working from home. These data make up your observations.

To analyze your data, you create a procedure to categorize the survey responses so you can pick up on repeated themes. You notice a pattern : most pets became more needy and clingy or agitated and aggressive.

Inductive reasoning is commonly linked to qualitative research , but both quantitative and qualitative research use a mix of different types of reasoning.

There are many different types of inductive reasoning that people use formally or informally, so we’ll cover just a few in this article:

Inductive reasoning generalizations can vary from weak to strong, depending on the number and quality of observations and arguments used.

Inductive generalizations use observations about a sample to come to a conclusion about the population it came from.

Inductive generalizations are also called induction by enumeration.

- The flamingos here are all pink.
- All flamingos I’ve ever seen are pink.
- All flamingos must be pink.

Inductive generalizations are evaluated using several criteria:

- Large sample: Your sample should be large for a solid set of observations.
- Random sampling: Probability sampling methods let you generalize your findings.
- Variety: Your observations should be externally valid.
- Counterevidence: Any observations that refute yours falsify your generalization.


Statistical generalizations use specific numbers to make statements about populations, while non-statistical generalizations aren’t as specific.

These generalizations are a subtype of inductive generalizations, and they’re also called statistical syllogisms.

Here’s an example of a statistical generalization contrasted with a non-statistical generalization.

| | Statistical generalization | Non-statistical generalization |
|---|---|---|
| Specific observation | 73% of students from a sample in a local university prefer hybrid learning environments. | Most students from a sample in a local university prefer hybrid learning environments. |
| Inductive generalization | 73% of all students in the university prefer hybrid learning environments. | Most students in the university prefer hybrid learning environments. |

Causal reasoning means making cause-and-effect links between different things.

A causal reasoning statement often follows a standard setup:

- You start with a premise about a correlation (two events that co-occur).
- You put forward the specific direction of causality, or refute any other direction.
- You conclude with a causal statement about the relationship between the two things.

For example:

- All of my white clothes turn pink when I put a red cloth in the washing machine with them.
- My white clothes don’t turn pink when I wash them on their own.
- Putting colorful clothes in the wash with light colors causes the colors to run and stain the light-colored clothes.

Good causal inferences meet a couple of criteria:

- Direction: The direction of causality should be clear and unambiguous based on your observations.
- Strength: There’s ideally a strong relationship between the cause and the effect.

Sign reasoning involves making correlational connections between different things.

Using inductive reasoning, you infer a purely correlational relationship where nothing causes the other thing to occur. Instead, one event may act as a “sign” that another event will occur or is currently occurring.

- Every time Punxsutawney Phil casts a shadow on Groundhog Day, winter lasts six more weeks.
- Punxsutawney Phil doesn’t cause winter to be extended six more weeks.
- His shadow is a sign that we’ll have six more weeks of wintery weather.

It’s best to be careful when making correlational links between variables. Build your argument on strong evidence, and eliminate any confounding variables, or you may be on shaky ground.

Analogical reasoning means drawing conclusions about something based on its similarities to another thing. You first link two things together and then conclude that some attribute of one thing must also hold true for the other thing.

Analogical reasoning can be literal (closely similar) or figurative (abstract), but you’ll have a much stronger case when you use a literal comparison.

Analogical reasoning is also called comparison reasoning.

- Humans and laboratory rats are extremely similar biologically, sharing over 90% of their DNA.
- Lab rats show promising results when treated with a new drug for managing Parkinson’s disease.
- Therefore, humans will also show promising results when treated with the drug.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

In deductive reasoning, you make inferences by going from general premises to specific conclusions. You start with a theory, and you might develop a hypothesis that you test empirically. You collect data from many observations and use a statistical test to come to a conclusion about your hypothesis.

Inductive research is usually exploratory in nature, because your generalizations help you develop theories. In contrast, deductive research is generally confirmatory.

Sometimes, both inductive and deductive approaches are combined within a single research study.

Inductive reasoning approach

You begin by using qualitative methods to explore the research topic, taking an inductive reasoning approach. You collect observations by interviewing workers on the subject and analyze the data to spot any patterns. Then, you develop a theory to test in a follow-up study.


## Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

- Chi square goodness of fit test
- Degrees of freedom
- Null hypothesis
- Discourse analysis
- Control groups
- Mixed methods research
- Non-probability sampling
- Quantitative research
- Inclusion and exclusion criteria

Research bias

- Rosenthal effect
- Implicit bias
- Cognitive bias
- Selection bias
- Negativity bias
- Status quo bias

## Frequently asked questions about inductive reasoning

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

- Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
- Statistical generalization: You use specific numbers about samples to make statements about populations.
- Causal reasoning: You make cause-and-effect links between different things.
- Sign reasoning: You make a conclusion about a correlational relationship between different things.
- Analogical reasoning: You make a conclusion about something based on its similarities to something else.

## Cite this Scribbr article


Bhandari, P. (2023, June 22). Inductive Reasoning | Types, Examples, Explanation. Scribbr. Retrieved July 10, 2024, from https://www.scribbr.com/methodology/inductive-reasoning/


## Pursuing Truth: A Guide to Critical Thinking

Chapter 14: Inductive Arguments

The goal of an inductive argument is not to guarantee the truth of the conclusion, but to show that the conclusion is probably true. Three important kinds of inductive arguments are

- Inductive generalizations,
- Arguments from analogy, and
- Inferences to the best explanation.

## 14.1 Inductive Generalizations

Sometimes, we want to know something about some group, but we don’t have access to the entire group. This may be because the group is too large, we can’t reach some members of the group, etc. So, we instead study a subset of that group. Then, we infer that the entire group is probably like the subset. The group we are interested in is called the population, and the observed subset of the population is called the sample.

Imagine that I wanted to know the level of current student satisfaction with access to administration at the university. I would probably survey students to get this information. The population would be students currently enrolled at the university, and the sample would be the students who actually complete the survey. The sample is guaranteed to be a subset of the population since, even if I give every student a chance to take the survey, we know that not all students will participate. Some students will return the survey, giving me an answer for the sample. I then conclude that the answer for the population is about what it is for the sample.

There are some terms that are important to know when dealing with data values. The mean is the mathematical average. To find the mean, add up all the values of the data points and divide by the number of data points. For example, the mean of 1, 2, 3, 5, 9 is 4. The median is the value that is in the center, such that half of the numbers are less than it and half are greater. In this case, the median is 3. The mode is the value that occurs most often. The mode of 1, 2, 4, 2, 7, 2 is 2.
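The three averages above can be checked with Python's standard `statistics` module, using the same numbers as the text:

```python
import statistics

data = [1, 2, 3, 5, 9]
print(statistics.mean(data))    # sum of values / number of values = 4
print(statistics.median(data))  # middle value = 3

print(statistics.mode([1, 2, 4, 2, 7, 2]))  # most frequent value = 2
```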

Another thing that is important to keep in mind is how spread out the values are. The average annual temperature in Oklahoma City is about the same as the average annual temperature in San Diego, leading one to conclude that the two cities have about the same comfort level. The difference is that the average monthly highs and lows range from 45 to 76 in San Diego and 29 to 94 in Oklahoma City. Three ways to talk about data dispersal are

- Range: the distance between the greatest and the smallest value,
- Percentile rank: the percentage of values that fall below some value, and
- Standard deviation: how closely the values are grouped around the mean.
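A minimal sketch of these three dispersal measures in Python; the data values here are made up for illustration:

```python
import statistics

values = [29, 45, 60, 76, 94]

# Range: distance between the greatest and the smallest value
value_range = max(values) - min(values)                 # 94 - 29 = 65

# Percentile rank of 60: percentage of values that fall below it
rank = 100 * sum(v < 60 for v in values) / len(values)  # 2 of 5 -> 40.0

# Standard deviation: how closely the values are grouped around the mean
spread = statistics.pstdev(values)
```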

## 14.1.1 Random Samples

In an inductive generalization, the premises will be claims about the sample, and the conclusion will be a claim about the population. Although such arguments are not valid, they can be inductively strong if the sample is good. Good samples are, first, not too small, and second, not biased. The ideal sample is representative, which means that it matches the population in every respect. Of course, reasoning from a representative sample to a population would always be perfect, since they would be, except for size, mirror images of each other. Unfortunately, there is no way to guarantee that a sample is representative, nor is there any way, presumably, to know that a sample is representative. To know that our sample was representative, we would already have to know everything about the population. If that were the case, what's the use of taking a sample?

Since we can’t do anything to guarantee a representative sample, our best way to ensure our sample is not biased is for it to be random. A random sample is one such that every member of the population had an equal chance of being included in the sample. Randomness is very difficult to achieve in practice. For example, if I send out an email invitation to participate in the university survey, it looks like every student has an equal chance of being included in the sample. Actually, though, there are several groups that are guaranteed to not be included: students who have forgotten their email password, students who don’t check email, students who don’t really care, etc. Even if I have a truly random sample, it is still possible for it to be a biased sample. This is called random sampling error. Random samples, though, are less likely to be biased than non-random samples.
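A truly random sample is easy to produce in code, even though it is hard to achieve with real people. A sketch, using hypothetical student ID numbers (the population size and the email subset are invented for illustration):

```python
import random

population = list(range(1, 10001))   # hypothetical IDs for 10,000 enrolled students

# random.sample draws without replacement; every ID has an equal
# chance of inclusion, which is the definition of a random sample
sample = random.sample(population, 400)

# In practice, the reachable pool is smaller: e.g., only students who
# actually read their email. Sampling from that subset is random within
# the subset, but may still be biased with respect to the full population.
email_users = population[:7000]      # hypothetical reachable subset
biased_sample = random.sample(email_users, 400)
```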

## 14.1.2 Margins of Error

The other feature of a good sample is that it needs to be big enough. How big is big enough? It often depends on what we want to know and the result that we get from the sample. This is because of something called the margin of error. Let's assume that I have a random sample from a population. I get a value from the sample, and I can be pretty confident that the value in the population is within the margin of error of the value in the sample. How confident? It depends on how big the margin of error is.

Does this sound confusing? It’s really not. Imagine that a friend is coming to visit you at your home on Monday. You, wanting to be prepared, ask her when she will arrive. Here are some possible responses that she might give:

- “Exactly 9:00”
- “About 9:00”
- “Sometime Monday morning”
- “Sometime on Monday”

Now, which of these can you be most confident is true? It’s easy to see that the first is the one in which we should be the least confident, and the fourth is the one in which we should be the most confident. The first is very precise, and then the answers become increasingly vague, and thus more likely to be true. Margins of error function the same way. The greater the margin of error, the more vague the claim. The more vague the claim, the greater the likelihood of being true.

There is a trade-off, though. Your friend could tell you that she will be there sometime this year. That’s very likely to be true, but not very helpful, because it’s so imprecise. The trade-off is between precision and likelihood. The more precise the claim, the less likely it is to be true. What we need to find is the best balance between the two.

For inductive generalizations, precision is a function of the margin of error. Likelihood is expressed by something called the confidence level. The confidence level of a study is a measure of how confident we can be that the right answer in the population is within the margin of error of the value in the sample. Here is a chart with confidence levels and their respective margins of error, expressed in standard deviations (SD).

| Margin of Error | Confidence Level |
|---|---|
| 1 SD | 67% |
| 2 SD | 95% |
| 3 SD | 99% |

So, if my margin of error is \(\pm 1\) standard deviation, then I can be 67% confident that the value in the population is within that margin of error. If I increase the margin of error by another standard deviation, my confidence level leaps a whole 32% from 67% to 95%. Increasing it by yet another standard deviation only gives me an additional 4% confidence level. So, the best balance between likelihood and precision seems to be at the 95% confidence level, and most, if not almost all, studies are done at the 95% confidence level.

The margin of error is a function of the sample size. As the sample size gets larger, the margin of error gets smaller. Statisticians use complicated formulas to calculate standard deviations and margins of error. If the population is very large, though, we can estimate them fairly simply: \(1\,SD = \frac{1}{2 \times \sqrt{N}}\), where \(N\) is the sample size. So, at the 95% confidence level, the margin of error is \(2\,SD = \frac{1}{\sqrt{N}}\). This gives us the following margins of error for a few, easy to calculate, sample sizes:

| Sample Size | Margin of Error |
|---|---|
| 100 | \(\pm 10\%\) |
| 400 | \(\pm 5\%\) |
| 10,000 | \(\pm 1\%\) |

Remember when I said that how large a sample needs to be depended on what we wanted to know and the result we got from the sample? Now, that should make more sense. Let's say you were conducting a survey to determine which of two candidates was going to win an upcoming election. You somehow managed to get a random sample of 100, 70% of whom were going to vote for candidate A. So, you conclude that between 60% and 80% of the population were going to vote for candidate A. Since your range does not overlap the 50% mark, you rightfully conclude that candidate A will win. Now, had 55% of your sample intended to vote for candidate A, you could only infer that between 45% and 65% of the population intended to vote for that candidate. To conclude something definite, you would need to shrink the margin of error, which means that you would need to increase your sample size.
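The election reasoning can be sketched directly, using the chapter's margin-of-error estimate and the sample figures from the example:

```python
import math

def interval_95(p_hat, n):
    """95% interval for a sample proportion p_hat with sample size n,
    using the chapter's margin-of-error estimate of 1 / sqrt(n)."""
    moe = 1 / math.sqrt(n)
    return p_hat - moe, p_hat + moe

low, high = interval_95(0.70, 100)   # roughly (0.60, 0.80): entirely above 0.50
print("call the race" if low > 0.50 else "too close to call")

low, high = interval_95(0.55, 100)   # roughly (0.45, 0.65): straddles 0.50
print("call the race" if low > 0.50 else "too close to call")
```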

## 14.1.3 Bad Samples

Since a good sample is unbiased and large enough, there are two ways for samples to be bad. Generalizing from a sample that is too small is called committing the fallacy of hasty generalization. Here are some examples of hasty generalizations:

- I’ve been to two restaurants in this city and they were both bad. There’s nowhere good to eat here.
- Who says smoking is bad for you? My grandfather smoked a pack a day and lived to be 100!

Cases like the second example are often called fallacies of anecdotal evidence. This happens when evidence is rejected because of a few first-hand examples. (I know someone who had a friend who…)

We’re often not very aware of the need for large enough samples. For example, consider this question:

A city has two hospitals, one large and one small. On average, 6 babies are born a day in the small hospital, while 45 are born a day in the large hospital. Which hospital is likely to have more days per year when over 70% of the babies born are boys?

- The large hospital
- The small hospital
- Neither, they would be about the same.

The answer is “the small hospital.” Think of this as a sampling problem. Overall, in the world, the number of boys born and the number of girls born are roughly the same. A larger sample is more likely to be close to the actual value than a smaller sample, so the small hospital is more likely to have more days when the births are skewed one way or another.
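The hospital question can be checked with a quick simulation. A sketch, assuming each birth is an independent 50/50 chance of a boy:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def skewed_days(births_per_day, days=365, threshold=0.7):
    """Count the days in a year on which more than `threshold`
    of that day's births are boys."""
    skewed = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > threshold:
            skewed += 1
    return skewed

small = skewed_days(6)    # small samples swing wildly: dozens of such days
large = skewed_days(45)   # large samples hug the true 50%: very few such days
print(small, large)
```

With 6 births a day, "more than 70% boys" means 5 or 6 of 6, which happens on roughly one day in nine; with 45 births a day, it requires 32 or more boys, which is far rarer.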

We’ll call drawing a conclusion from a biased sample the fallacy of biased generalization. Imagine a study in which 1,000 different households were randomly chosen to be called and asked about the importance of regular church attendance. The result was that only 15% of the families surveyed said that regular church attendance was important. On the surface, it seems that a study like this would be good: it’s certainly large enough, and the families were chosen randomly. But let’s imagine that the phone calls were made between 11:00 and 12:00 on Sunday morning. Would that make a difference?

The classic example is the 1936 U.S. presidential election, in which Alfred Landon, the Republican governor of Kansas, ran against the incumbent, Franklin D. Roosevelt. The Literary Digest conducted one of the largest and most expensive polls ever done. They used every telephone directory in the country, lists of magazine subscribers, and membership rosters of clubs and associations to create a mailing list of 10 million names. Everyone on the list was sent a mock ballot that they were asked to complete and return to the magazine. The editors of the magazine expressed great confidence that they would get accurate results, saying, in their August 22 issue,

Once again, [we are] asking more than ten million voters – one out of four, representing every county in the United States – to settle November’s election in October.

Next week, the first answers from these ten million will begin the incoming tide of marked ballots, to be triple-checked, verified, five-times cross-classified and totaled. When the last figure has been totted and checked, if past experience is a criterion, the country will know to within a fraction of 1 percent the actual popular vote of forty million [voters].

2.4 million people returned the survey, and the magazine predicted that Landon would get 57% of the vote to Roosevelt’s 43%.

The election was a landslide victory for Roosevelt. He got 62% of the vote to Landon’s 38%. What went wrong?

The problem wasn’t the size of the sample: although only 24% of the surveys were returned, 2.4 million responses is certainly large enough for an accurate result. There were two problems. The first was that in 1936 the country was in the midst of the Great Depression. Telephones, magazine subscriptions, and club memberships were all luxuries. So the list that the magazine generated was biased toward upper- and middle-class voters.

The second problem was that the survey was self-selected. In a self-selected survey, it is the respondents who decide if they will be included in the sample. Only those who care enough to respond are included. Local news stations often do self-selected surveys. They will ask a question during the broadcast, then have two numbers to dial, one for “Yes” and another for “No.” There’s never a number for “Don’t really care,” because those people wouldn’t bother calling in anyway. The 1936 survey failed to include people who didn’t care enough to respond to the survey, but they very well might have cared enough to vote.
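The effect of a biased sampling frame is easy to simulate. The numbers below are invented for illustration (they are not the actual 1936 figures): suppose 40% of voters support Landon, but his wealthier supporters are three times as likely to appear on the phone-and-club mailing list.

```python
import random

random.seed(1)

# Invented electorate: 40% Landon, 60% Roosevelt (not the real 1936 numbers).
population = ["Landon"] * 40_000 + ["Roosevelt"] * 60_000

# Suppose wealthier Landon supporters are three times as likely to appear
# in the phone-directory/club-membership sampling frame.
def on_mailing_list(voter):
    return random.random() < (0.6 if voter == "Landon" else 0.2)

frame = [v for v in population if on_mailing_list(v)]
sample = random.sample(frame, 10_000)

landon_share = sample.count("Landon") / len(sample)
print(f"Predicted Landon share: {landon_share:.0%}")  # well above the true 40%
```

Even with a sample of 10,000, the poll badly overstates Landon’s support, because the error comes from the frame, not the sample size: drawing more names from a skewed list only reproduces the skew more precisely.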

## 14.1.4 Bad Polls

Good surveys are notoriously difficult to construct. There are a number of ways that surveys can be self-selected; think of what you do when you see someone standing in the mall holding a clipboard. Caller ID now makes telephone surveys self-selected. If your caller ID read, “ABC Survey Company,” would you answer the phone?

Today, telephone surveys are almost guaranteed to be biased. Most telephone surveys are conducted by calling traditional “landline” phones, not mobile phones. Increasingly, though, people are giving up landlines in favor of having only mobile phones. So, by relying on landline calls, pollsters limit their respondents mostly to older generations.

Another example of a bad poll is the push-poll. Here, the goal is not to pull information from the sample, but to push information to the people in the sample. A few years ago, I received a call from the National Rifle Association. A recorded message from the NRA Executive Vice-President concerning the U.N. Small Arms Treaty was followed by this single-question survey:

Do you think it’s OK for the U.N. to be on our soil attacking our gun rights?

I was instructed to press “1” if I did not think it was OK for the U.N. to be on our soil attacking our gun rights. That was followed by a repeat instruction to press “1” if I did not think it was OK. I was then instructed to press “2” if I did think it was OK for the U.N. to attack our gun rights. (Note that I was only given that instruction once.)

This survey was a classic example of a push-poll. It was designed simply to push a message out to the population. This is evident from the question itself. What useful information do we expect to gain from asking people whether they think it’s OK for the U.N. to attack our gun rights? Do we really not know how people will answer that question? It’s no different from my polling my students to find out if they would like to get out of class early. As far as information gathering goes, it’s a complete waste of time and money. For pushing propaganda, on the other hand, it’s very effective.

This is also a good example of a slanted question. When I looked up the purpose of the U.N. Small Arms Treaty, its stated purpose was to keep firearms out of the hands of terrorists. If the question had been, “Do you think it’s OK for the U.N. to negotiate a treaty designed to prevent guns from falling into the hands of terrorists?” I would expect a very different result.

One reason it is very difficult to construct good surveys is order effects. The order in which questions appear affects how people respond to them. One study included these two questions:

- Should the U.S. allow reporters from a fundamentalist country like Iraq to come here and send back reports of the news as they see it to their country?
- Should an Islamic fundamentalist country like Iraq let U.S. news reporters come in and send back reports of the news as they see it to the U.S.?

When question 1 was asked first, 55% of respondents said yes. When question 1 was asked second, however, 75% of the respondents answered yes. What seems to happen here is a basic commitment to fairness. Once I have already said that other countries should let in our reporters, then there’s no fair reason for me not to allow their reporters into my country.

To summarize, here are the marks of a bad poll:

- Self-selected samples
- Ignored order effects
- Slanted questions
- Loaded questions

## 14.2 Arguments from Analogy

Another common type of inductive argument is the argument from analogy. Let’s say that you are shopping for a car, so that you can have transportation to school, work, and so on. Since it’s important that you get to the places on time, you need to buy a reliable car. You find a good deal on a 2013 Honda Civic, but how do you know that it will be reliable? One way to judge reliability is to look at reliability reports from owners of other 2013 Honda Civics. The more cases in which they reported that their cars were reliable, the more you can conclude that yours will be also.

With inductive generalizations, we were reasoning from a sample to a population. Arguments from analogy reason from a sample to another individual member of the population, called the target. The members of the sample have a number of properties in common; they are all Honda Civics made in 2013. They also have another property in common that we will call the property in question, in this case, reliability. Our target has all of the other properties, so it probably also has the property in question. The more similar our target is to the sample in some respects, the more similar it is likely to be in other respects. Here is the basic structure:

- The members of the sample have properties \(P_1, \ldots, P_n\) and \(P_Q\).
- The target has \(P_1, \ldots, P_n\).
- Therefore, the target probably also has \(P_Q\).

These arguments are weak when

- The similarities stated aren’t relevant to the property in question. (In our example, the color of the car would not be relevant to its reliability.)
- There are relevant dissimilarities. (If all the members of the sample had excellent maintenance records, but our target had very poor maintenance, then we wouldn’t expect the target to be reliable just because the members of the sample were.)
- There are instances of the sample that do not have the property in question. (The more 2013 Honda Civics we find that are unreliable, the weaker the argument becomes.)

So, the arguments are stronger when there are

- More relevant similarities,
- Fewer relevant dissimilarities, and
- Fewer known instances of things that have the shared properties but lack the property in question.
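The three strength factors can be made concrete with a toy sketch (my own illustration; the property names are invented): for the Civic example, we tabulate the relevant similarities, the relevant dissimilarities, and the counterexamples in the sample.

```python
# Each sample record lists shared properties (P1..Pn) and whether the
# item has the property in question (reliability).
sample = [
    {"year": 2013, "model": "Civic", "well_maintained": True, "reliable": True},
    {"year": 2013, "model": "Civic", "well_maintained": True, "reliable": True},
    {"year": 2013, "model": "Civic", "well_maintained": True, "reliable": True},
    {"year": 2013, "model": "Civic", "well_maintained": True, "reliable": False},
]
target = {"year": 2013, "model": "Civic", "well_maintained": False}

# Similarities: properties the target shares with every member of the sample.
similarities = [k for k in target
                if all(m[k] == target[k] for m in sample)]
# Dissimilarities: properties on which the target differs from the sample.
dissimilarities = [k for k in target
                   if any(m[k] != target[k] for m in sample)]
# Counterexamples: sample members that lack the property in question.
counterexamples = sum(not m["reliable"] for m in sample)

print(similarities)      # ['year', 'model']
print(dissimilarities)   # ['well_maintained']
print(counterexamples)   # 1
```

Here the target matches the sample on year and model but not on maintenance, and one sample member is unreliable, so both the relevant dissimilarity and the counterexample weaken the inference that the target will be reliable.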

## 14.3 Inferences to the Best Explanation

Our final type of inductive argument to discuss in this chapter is the inference to the best explanation, also called abductive reasoning. Very simply, this is used when we have a situation that needs explanation. You consider the possible explanations, and it’s rational for you to believe the best one.

How do we decide which explanation is best, though? Here are some criteria:

- It must explain the data; that is, it must tell us why the data is true.
- Of the explanations that do explain the data, it must be the best.

There are slightly more boys born than girls. Worldwide, the ratio of boys to girls is 107:100. This is partially explained by sex-selective abortion in countries where sons are more desired than daughters. If we eliminate those cases, the ratio is still 105:100. ↩︎

There is no general agreement on this. Sometimes “hasty generalization” is used for both. I think it’s useful to have two terms to distinguish the two different errors. ↩︎

