The Sociological Imagination

Essay questions on C. Wright Mills’s The Sociological Imagination.

In what sense does C. Wright Mills think men experience life today as a “series of traps”?

For Mills, men today go through a series of confined spaces. There’s the workplace, the home, etc. Each is a trap because you’re told what to do in each space. But life is also a trap because men feel powerless to affect the decisions that impact their lives. This isn’t only at work, but also in politics. The everyday man worries about nuclear bombs, for instance, but doesn’t have a role to play in the decisions related to the Cold War.

According to Mills, how is contemporary sociology complicit with bureaucracy?

For Mills, one branch of sociology, which he calls abstracted empiricism, is itself bureaucratic. By emphasizing the repetitive task of polling large samples of people, sociology takes on the bureaucratic ideals of efficiency rather than truth. By studying how to be more efficient, sociology also helps bureaucracies (sociology’s “clients”) extract more from their employees or citizens. Instead of serving the common man, sociology of this kind serves the common man’s boss.

What are the main critiques Mills has of Talcott Parsons?

Parsons is, for Mills, the prime example of “grand theory.” There are two main faults with this kind of theory. The first is that it is overly complicated in its language, using big words and long passages when the ideas are actually quite simple and could be conveyed in simpler prose. The second is that the work is so theoretical, thinking in general and universal terms like “human nature,” that it cannot actually explain what real people do in real life.

What, according to Mills, should good social science incorporate and do?

What Mills calls “classical social science,” and which he advocates, always includes three things. The first is biography, or the study of men’s private problems. The second is social structure, or the institutions of a society and how they are related. The third is history, or how societies are different from each other across time and place. Good social science, according to Mills, includes all three of these at once, connecting personal “milieu” with public social structures.

What are the politics of doing classical social science, according to Mills?

Mills tracks the history of sociology back to mid-19th century reform movements. Sociology is then, at its beginning, a liberal program. In the 19th century, it framed the private problems of working class people as public issues for the middle classes to help solve. Today, Mills says, social science can regain its liberal politics by addressing itself to a public and helping men see how social structures impact their lives. Then sociology can help society achieve democracy, defined as when everyone gets to participate in the decisions that affect their lives.


The Sociological Imagination Questions and Answers

The Question and Answer section for The Sociological Imagination is a great resource to ask questions, find answers, and discuss the book.

Explain in detail how the sociological imagination helps one develop a better understanding of society and social problems.

The Sociological Imagination is C. Wright Mills’s 1959 statement about what social science should be and the good it can produce. In this way, it is a polemical book. It has a vision for sociology, and it criticizes those with a different vision....



PrepScholar


What Is Sociological Imagination? How Can You Use It?


Have you ever wondered why your family cooks turkey on Thanksgiving? If you ask, you might get all kinds of reasons: because it’s tradition, because it tastes good, because it’s what the pilgrims ate back in the early days of America. All of those factors—taste, personal history, and world history—lead to one small action of you eating turkey on a holiday.

That’s the premise of sociological imagination. Like imagination in the more typical sense, the sociological imagination asks us to use our brains to think differently about things and consider why we do the things we do.

In this article, we’ll introduce the concept of sociological imagination, its history, how it changed the sociological field, and how you can use it every day to change your way of thinking about the world.

What Is Sociological Imagination?

The sociological imagination is a method of thinking about the world. As you may have guessed, it’s part of the field of sociology, which studies human society.

When you put “sociological”—studying society—and “imagination”—the concept of forming new ideas, often creatively—together, you get a pretty good definition of the concept: a method of thinking about both individuals and society by considering a variety of sociological contexts. 

The sociological imagination encourages people to think about their lives not just on an individual level, but also in societal, biological, and historical context. Societal context tells us about our culture—when we consider it, we think about how our desires, actions, and thoughts are shaped by our community and how that community is changing. Biological context tells us how “human nature” impacts our desires and needs. And lastly, historical context considers our place in time: how have events of the past led up to where we are currently?

Basically, the concept of sociological imagination suggests that who you are as an individual is also the you shaped by your immediate surroundings, your family, your friends, your country, and the world as a whole. You may make individual choices about what to eat for lunch, but what you choose—a tuna sandwich, lobster ravioli, or shrimp tacos—is also determined by societal factors like where you live and what you’ve grown up eating.

To use the sociological imagination is to shift your perspective away from yourself and look at things more broadly, bringing in context to individual actions.

If you’re thinking about lunch, you’re probably more likely to choose something that’s familiar to you. In another culture or even another part of your city, a person who is very similar to you might choose a different food because of what’s familiar to them. If we zoom out a little further, we might realize that people in landlocked states might be unlikely to choose a seafood-based lunch at all because fresh fish is more expensive than it is on the coast. Zoom out more, and you might realize that fish isn’t even on the menu for some cultures because of societal taboos or restrictions.

And those are just spatial boundaries. You can also consider your family’s relationship with eating fish, or how your cultural and ethnic heritage impacted where you are, what food you have access to, and your personal tastes. All of this lets you see yourself and your culture in a new light, as a product of society and history.

In this sense, using a sociological imagination lets you look at yourself and your culture as a third-party observer. The goal is not to be dispassionate and distant, but rather to see yourself not as “natural” or “normal” but as a part of larger systems, the same way that all people are.


Why the Sociological Imagination Is Useful

Part of the appeal of using a sociological imagination is that it helps people avoid apathy. In this context, apathy refers to a sense of indifference toward examining the morality of one’s leaders. According to C. Wright Mills, creator of the idea of the sociological imagination, if we accept that our beliefs, traditions, and actions are all normal and natural, we are less likely to object when our leaders and community members do things that are immoral.

Considering sociological context allows individuals to question and change society rather than just live in it. When we understand historical and social contexts, we’re better equipped to look at our actions and the actions of our community as a result of systems—which can be changed—rather than as inherent to humanity.

In more technical terms, Mills was challenging the dominant structural functionalist approach to sociology. Structural functionalism suggests that society is composed of different structures that shape the interactions and relationships between people, and those relationships can be understood and analyzed to help us learn more about a society.

Where Mills and his concept of the sociological imagination differed was in his belief that society was not only a series of systems; the role of the individual should also be considered. In fact, Mills believed that social structures arise because of conflict between groups, typically the elite and the others, such as the government and the citizens or the rich and the poor.


Where Does the Term Come From?

As previously mentioned, the term “sociological imagination” originates with C. Wright Mills. In his 1959 book The Sociological Imagination, the Columbia University professor of sociology suggested that sociologists rethink the way they were engaging with the field. During his time, many sociologists took a top-down view of the world, focusing on systems rather than on individuals. Mills believed both were important, and that society should be understood as a relationship between different systems that originated in conflict.

Though his book has since been named one of the most important sociological texts of the 20th century, Mills was not popular among his contemporaries. He was especially concerned with class, particularly the elite and the military, and with how conflict between the elite and the non-elite impacted the actions of individuals and vice versa.

Mills was also opposed to the tendency of sociologists to observe rather than act. He believed that sociology was a great tool for changing the world, and believed that using the sociological imagination encouraged people of all kinds, including sociologists, to expose and respond to social injustice.

Mills referred to the tendency of sociologists to think in abstraction as “grand theory.” This tendency led sociologists of the time to be more concerned with organization and taxonomy than with understanding—because Mills cared so much about the experience of the individual as well as the experience of the whole, this contributed to his feeling that the sociological field was too far removed from the actual humans who comprise society.

Because so many of Mills’ ideas about the sociological imagination were intended to bring sociologists closer to the people and their concerns, he developed a series of tenets to encourage them to think differently.


Mills’ Sociological Imagination Tips

Mills' book was all about how the sociological imagination could help society, but it wasn't only a theoretical approach.  The Sociological Imagination contained tips for sociologists as well as the general public to help them better contextualize the world!

Avoid Existing Sets of Procedures

So much of sociology was based on existing systems that Mills felt the field focused on method over humanity. To combat this, he suggested that sociologists should function as individuals and propose new theories and methodologies that could challenge and enhance established norms.

Be Clear and Concise

Mills believed that some of the academic language used in the field of sociology encouraged the sense of distance that so troubled him. Instead, he advocated that sociologists be clear and concise when possible, and that they do not couch their theories in language intended to distance themselves from society and from criticism.

Observe the Macro and Micro

Prior to Mills’ work, structural functionalism was the primary philosophy of the field. Mills disagreed with this top-down approach to sociology, and encouraged sociologists to engage with the macro, as they had been doing, in addition to the micro. He believed that history is composed of both the big and the small, and that study of each is required for a robust field.

Observe Social Structure as Well as Milieu

Building on this point, Mills also suggested that social structure and individual experience, which he called “milieu,” were interconnected and equally worthy of study. He explained that individual moments, as well as long spans of time, are equally necessary to understanding society.

Avoid Arbitrary Specialization

Mills advocated for a more interdisciplinary approach to sociology. Part of the sociological imagination is thinking outside of the boundaries of yourself; to do so, Mills suggested that sociologists look beyond their specialized fields toward a more comprehensive understanding.

Always Consider Humanity and History

Because so much of sociology in the time of Mills’ writing was concerned with systems, he advocated for more consideration of both humanity and history. That meant looking at human experience on an individual and societal level, as well as within a specific and broad historical context.

Understand Humanity as Historical and Social Actors

Mills wanted sociologists to consider humans as products of society, but also society as a product of humanity. According to Mills, people may act on an individual basis, but their individual desires and thoughts are shaped by the society in which they live. Therefore, sociologists should consider human action as a product not just of individual desires, but also of historical and social forces.

Consider Individuals in Connection with Social Issues—Public is Personal, Personal is Public

One of Mills’ biggest points was that an individual problem is often also a societal problem. He suggested that sociologists should look beyond the common discourse and find alternate explanations and considerations.


2 In-Depth Sociological Imagination Examples

The sociological imagination can be complex to wrap your mind around, particularly if you’re not already a sociologist. When you take this idea and apply it to a specific example, however, it becomes a lot easier to understand how and why it works to broaden your horizons. As such, we've developed two in-depth sociological imagination examples to help you understand this concept.

Buying a Pair of Shoes

Let’s start with a pretty basic example—buying a pair of shoes. When you think about buying a new pair of shoes, your explanation may be fairly simple: you need a new pair for a particular purpose, like running or a school dance, or you simply like the way they look. Both of those things may be true, but using your sociological imagination takes you out of the immediacy of those two answers and encourages you to think deeper.

So let’s go with the first explanation: you need a new pair of running shoes. The first step toward using the sociological imagination is to ask yourself “why?” Well, so you can go running, of course! But why do you want to go running, as opposed to any other form of exercise? Why get into exercise at all? Why new running shoes rather than used ones?

Once you start asking these questions, you can start to see how it’s not just an individual choice on your part—the decision to buy running shoes is a product of the society you live in, your economic situation, your local community, and so on. Maybe you want to go running because you want to get into shape, and your favorite Instagram profile is big into running. Maybe you recently watched a news report about heart health and realized that you need a new exercise regimen. And maybe you’ve chosen new shoes over used ones because you have the financial means to purchase a name-brand pair.

If you were a different person in a different context—say if you lived in a poorer area, or an area with more crime, or another country where other forms of exercise are more practical or popular—you might have made different choices. If you lived in a poorer area, designer shoes may not even be available to you. If there was a lot of crime in your area, running might be an unsafe method of exercise. And if you lived in another country, maybe you’d take up biking or tai chi or bossaball.

When you consider these ideas, you can see that while you’re certainly an individual making individual decisions, those decisions are, in part, shaped by the context you live in. That’s using your sociological imagination—you’re seeing how the personal decision of buying a pair of running shoes is also public, in that what is available to you, what societal pressures you experience, and what you feel are all shaped by your surroundings.

Who People Choose to Marry

Marriage for love is the norm in American culture, so we assume that the same is true everywhere and always has been. Why else would anybody marry?

When we use our sociological imaginations, we can see that there’s more to it. You might get married to your partner because you love them, but why else might you get married? Well, it can make your taxes simpler, or make you more qualified to get a home loan. If your partner is from another country, it might help them stay in the US. So even in the United States, where marriage is typically thought of as a commitment of love, there are multiple other reasons you might get married.

Throughout history, marriage was a means to make alliances or acquire property, usually with a woman as a bargaining chip. Love wasn’t even part of the equation—in fact, in ancient Rome one politician was ousted from the Senate for having the gall to kiss his wife in public.

It wasn’t until the 17th and 18th centuries that love became a reason to marry, thanks to the Enlightenment idea that lives should be dedicated to pursuing happiness. But at that point, women were still seen more like property than people—it wasn’t until the women’s rights movements of the 1900s that American women advocated for their own equality in marriage.

In other cultures, polygamy might be acceptable, or people might have arranged marriages, where a person’s family chooses their spouse for them. That sounds strange to us, but only because in our culture the norm is marrying for love, with other reasons, such as financial or immigration concerns, being secondary.

So even for an individual, there might be multiple factors at play in the decision to be made. You may never articulate these desires because getting married for love is our cultural norm (and it wouldn’t sound very good in a wedding speech), but these kinds of considerations do have subconscious effects on our decision-making.


Sociological Imagination in the Sociology Community

As you might have gathered from the numerous challenges Mills’ concept of the sociological imagination posed to established practices, he wasn’t a super popular figure in sociology during his time.

Many sociologists were resistant to Mills’ suggested changes to the field. Indeed, Mills is sometimes heralded as being ahead of his time, as the values he espoused about human connection and societal issues became prominent in the 1960s, just after his death.

One of his former students wrote about how Mills stood in contrast to other sociologists of the era, saying:

“Mills’s very appearance was a subject of controversy. In that era of cautious professors in gray flannel suits he came roaring into Morningside Heights on his BMW motorcycle, wearing plaid shirts, old jeans and work boots, carrying his books in a duffel bag strapped across his broad back. His lectures matched the flamboyance of his personal image, as he managed to make entertaining the heavyweight social theories of Mannheim, Ortega and Weber. He shocked us out of our Silent Generation torpor by pounding his desk and proclaiming that every man should build his own house (as he himself did a few years later) and that, by God, with the proper study, we should each be able to build our own car! “Nowadays men often feel that their private lives are a series of traps,” Mills wrote in the opening sentence of The Sociological Imagination, and I can hear him saying it as he paced in front of the class, speaking not loudly now but with a compelling sense of intrigue, as if he were letting you in on a powerful secret.”

Though Mills’ philosophy is hugely important to today’s sociology field, his skewering of power and of the myopic nature of his era’s academics didn’t make him many friends.

However, as time has gone on, the field has come to regard him differently. His challenge to the field helped reshape it into something that is concerned with the macro as well as the micro. Conversations—even negative ones—about Mills’ proposals helped circulate his ideas, leading to The Sociological Imagination eventually being voted the second most important sociological text of the 20th century.


How to Apply Sociological Imagination to Your Own Life

The great thing about sociological imagination is that you don’t need to be a trained sociologist to do it. You don’t need a huge vocabulary or a deep understanding of sociological texts—just the willingness to step outside of your own viewpoint and consider the world in context.

This helps you escape your own perspective and think about the world differently. That can mean you’re able to make decisions less tinged with cultural bias—maybe you don’t need those expensive running shoes after all.

To train your sociological imagination, get into the habit of asking questions about behavior that seems “normal” to you. Why do you think it’s normal? Where did you learn it? Are there places it may not be seen as normal?

Consider a relatively common tradition like Christmas, for example. Even if you don’t come from a particularly religious family, you may still celebrate the holiday because it’s common in our society. Why is that? Well, it could be that it’s a tradition. But where did that tradition come from? Probably from your ancestors, who may have been more devout than your current family. You can trace this kind of thinking backward and consider your personal history, your family history, and the surrounding cultural context (not all cultures celebrate Christmas, of course!) to understand how something that feels “normal” got to that state.

But cultural context isn’t the only important part of the sociological imagination—Mills also suggested that sociologists should consider the personal and the public, as well. When you come upon something that seems like a personal issue, think about it in a societal context. Why might that person behave the way that they do? Are there societal causes that might contribute to their situation?

A common example of this is the idea of unemployment. If you are unemployed, you may feel simultaneous feelings of frustration, unease, and even self-loathing. Many people blame themselves for their lack of a job, but there are societal factors at play, too. For example, there may simply be no jobs available nearby, particularly if you’re trained in a specific field or need to hit a certain income level to care for your family. You may have been laid off due to poor profits, or even because you live in a place where it’s legal to terminate employment based on sexuality or gender identity. You may be unable to find work because you’re spending so much time caring for your family that you simply don’t have time to apply for many jobs.

So while unemployment may seem like a personal issue, there are actually lots of societal issues that can contribute to it. Mills’ philosophy asks us to consider both in conversation with one another—it’s not that individuals have no free will, but rather that each person is a product of their society as well as an individual.



Melissa Brinks graduated from the University of Washington in 2014 with a Bachelor's in English with a creative writing emphasis. She has spent several years tutoring K-12 students in many subjects, including in SAT prep, to help them prepare for their college education.


Module 1: Foundations of Sociology

The Sociological Imagination

Learning Outcomes

  • Define the sociological imagination
  • Apply the sociological imagination

Figure 1. The sociological imagination enables you to look at your life and your own personal issues and relate them to other people, history, or societal structures.

Many people believe they understand the world and the events taking place within it, even though they have not actually engaged in a systematic attempt to understand the social world, as sociologists do. In this section, you’ll learn to think like a sociologist.

The sociological imagination, a concept established by C. Wright Mills (1916-1962), provides a framework for understanding our social world that far surpasses any common-sense notion we might derive from our limited social experiences. Mills was an American sociologist who brought tremendous insight into the daily lives of society’s members. Mills stated: “Neither the life of an individual nor the history of a society can be understood without understanding both” [1]. The sociological imagination is the practice of making connections between personal challenges and larger social issues. Mills identified “troubles” (personal challenges) and “issues” (larger social challenges), also known as biography and history, respectively. The sociological imagination allows individuals to see the relationships between events in their personal lives (biography) and events in their society (history). In other words, this mindset gives individuals the ability to recognize the relationship between their personal experiences and the larger society in which they live.

Personal troubles are private problems experienced within the character of the individual and the range of their immediate relation to others. Mills identified that we function in our personal lives as actors and actresses who make choices about our friends, family, groups, work, school, and other issues within our control. We have a degree of influence on the outcome of matters within this personal level. A college student who parties 4 nights out of 7, who rarely attends class, and who never does his homework has a personal trouble that interferes with his odds of success in college. However, when 50% of all college students in the United States never graduate, we label it as a larger social issue.

Larger social or public issues are those that lie beyond one’s personal control and the range of one’s inner life. These pertain to broader matters of organization and process, which are rooted in society rather than in the individual. Nationwide, students often come to college as freshmen ill-prepared for the rigors of college life, having rarely been challenged enough in high school to make the adjustments necessary to succeed in college. Nationwide, the average teenager text messages, surfs the net, plays video games, watches TV, spends hours each day with friends, and works at least part-time. Where and when would he or she get experience focusing attention on college studies and develop the rigorous self-discipline required to transition into college?

The real power of the sociological imagination is found in how we learn to distinguish between the personal and social levels in our own lives. This includes economic challenges. For example, many students do not purchase required textbooks for college classes, at both 2-year colleges and 4-year colleges and universities. Many students simply do not have the money to purchase textbooks, and while this can seem like a “choice,” some of the related social issues include rising tuition rates, decreasing financial aid, increasing costs of living, and decreasing wages. The Open Educational Resource (OER) movement has sought to address this personal trouble as a public issue by partnering with institutional consortia and encouraging large city and state institutions to adopt OER materials. A student who does not purchase the assigned textbook might see this as a private problem, but this student is part of a growing number of college students who are forced to make financial decisions based on structural circumstances.

A majority of personal problems are not experienced as exclusively personal issues, but are influenced and affected by social norms, habits, and expectations. Consider issues like homelessness, crime, divorce, and access to healthcare. Are these all caused by personal choices, or by societal problems? Using the sociological imagination, we can view these issues as interconnected personal and public concerns.

For example, homelessness may be blamed on the individuals who are living on the streets. Perhaps their personal choices influenced their position; some would say they are lazy, unmotivated, or uneducated. This approach of blaming the victim fails to account for the societal factors that also lead to homelessness—what types of social obstacles and social failings might push someone towards homelessness? Bad schools, high unemployment, high housing costs, and little family support are all social issues that could contribute to homelessness. C. Wright Mills, who originated the concept of the sociological imagination, explained it this way: “the very structure of opportunities has collapsed. Both the correct statement of the problem and the range of possible solutions require us to consider the economic and political institutions of the society, and not merely the personal situation and character of a scatter of individuals.”



  • Mills, C. W.: 1959, The Sociological Imagination, Oxford University Press, London. ↵
  • Modification, adaptation, and original content. Authored by : Sarah Hoiland for Lumen Learning. Provided by : Lumen Learning. License : CC BY: Attribution
  • The Sociological Imagination. Provided by : College of the Canyons. Located at : https://www.canyons.edu/Offices/DistanceLearning/OER/Documents/Open%20Textbooks%20At%20COC/Sociology/SOCI%20101/The%20Sociological%20Imagination.pdf . Project : Sociology 101. License : CC BY: Attribution


What Is Sociological Imagination: Definition & Examples

Charlotte Nickerson

Research Assistant at Harvard University

Undergraduate at Harvard University

Charlotte Nickerson is a student at Harvard University obsessed with the intersection of mental health, productivity, and design.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

  • The term sociological imagination describes the type of insight offered by sociology: connecting the problems of individuals to those of broader society.
  • C. Wright Mills, the originator of the term, contended that both sociologists and non-academics can develop a deep understanding of how the events of their own lives (their biography) relate to the history of their society. He outlined a list of methods through which both groups could do so.
  • Mills believed that American society suffered from the fundamental problems of alienation, moral insensibility, threats to democracy, threats to human freedom, and conflict between bureaucratic rationality and human reason, and that the development of the sociological imagination could counter these.

What is Sociological Imagination?

Sociological imagination, an idea that first emerged in C. Wright Mills’ book of the same name, is the ability to connect one’s personal challenges to larger social issues.

The sociological imagination is the ability to link the experience of individuals to the social processes and structures of the wider world.

It is this ability to examine the ways that individuals construct the social world, and how the social world in turn impinges on the lives of individuals, that is the heart of the sociological enterprise.

This ability can be thought of as a framework for understanding social reality, and describes how sociology is relevant not just to sociologists, but to those seeking to understand and build empathy for the conditions of daily life.

When the sociological imagination is underdeveloped or absent in large groups of individuals for any number of reasons, Mills believed that fundamental social issues resulted.

Sociological Imagination Theory

C. Wright Mills established the concept of sociological imagination in the 20th century.

Mills believed that “neither the life of an individual nor the history of a society can be understood without understanding both”: the daily lives of society’s members and the history of their society and its issues.

He referred to the problems that occur in everyday life, or biography, as troubles and the problems that occur in society, or history, as issues.

Mills ultimately created a framework intended to help individuals realize the relationship between personal experiences and greater society (Elwell, 2002).

Before Mills, sociologists tended to focus on understanding how sociological systems worked, rather than exploring individual issues. Mills, however, pointed out that these sociologists, functionalists chief among them, ignored the role of the individual within these systems.

In essence, Mills claimed in his book, The Sociological Imagination , that research had come to be guided more by the requirements of administrative concerns than by intellectual ones.

He critiqued sociology for focusing on accumulating facts that only served to facilitate the administrative decisions of, for example, governments.

Mills believed that, to truly fulfill the promise of social science, sociologists and laypeople alike had to focus on substantial, society-wide problems, and relate those problems to the structural and historical features of the society and culture that they navigated (Elwell, 2002).

Mills’ Guidelines for Social Scientists

In the appendix of The Sociological Imagination, Mills set forth several guidelines that would lead to “intellectual craftsmanship.” These are, paraphrased (Mills, 2000; Elwell, 2002):

Scholars should not split work from life, because work and life form a unity.

Scholars should keep a file, or a collection, of their own personal, professional, and intellectual experiences.

Scholars should engage in a continual review of their thoughts and experiences.

Scholars may find a truly bad sociological book to be as intellectually stimulating and conducive to thinking as a good one.

Scholars must have an attitude of playfulness toward phrases, words, and ideas, as well as a fierce drive to make sense of the world.

The sociological imagination is stimulated when someone assumes a willingness to view the world from the perspective of others.

Sociological investigators should not be afraid, in the preliminary and speculative stages of their research, to think in terms of imaginative extremes, and,

Scholars should not hesitate to express ideas in language that is as simple and direct as possible. Ideas are affected by how they are expressed. When sociological perspectives are expressed in deadening language, they create a deadened sociological imagination.

Mills’ Original Social Problems

Mills identified five main social problems in American society: alienation , moral insensibility, threats to democracy, threats to human freedom, and the conflict between bureaucratic rationality and human reason (Elwell, 2015).

1. Threats to Democracy and Freedom

The end result of these problems of alienation, political indifference, and the economic and political concentration of power, according to Mills, is a serious threat to democracy and freedom.

He believed that, as bureaucratic organizations became large and more centralized, more and more power would be placed into the hands of a small elite (Elwell, 2006).

2. Alienation

Mills believed that alienation is deeply rooted in how work is organized in modern society; however, unlike Marx, Mills does not attribute alienation solely to ownership of the means of production, but to the modern division of labor.

Mills observed that, on the whole, jobs are broken up into simple, functional tasks with strict standards. Machines or unskilled workers take over the most tedious tasks (Elwell, 2002).

As the office was automated, Mills argued, authority and job autonomy became the attributes of only those highest in the work hierarchy. Most workers are discouraged from using their own judgment; their decision-making is constrained by the strict rules handed down by others.

In this loss of autonomy, the average worker becomes alienated from their intellectual capacities and work becomes an enforced chore (Elwell, 2015).

3. Moral Insensibility

Another major problem that C. Wright Mills identified in modern American society was that of moral insensibility. He pointed out that, as people lost faith in their leaders in government, religion, and the workplace, they became apathetic.

He considered this apathy a “spiritual condition” that underlay many problems — namely, moral insensibility. As a result of moral insensibility, people within society accept atrocities, such as genocide, committed by their leaders.

Mills considered the source of cruelty to be moral insensibility and, ultimately, the underdevelopment of the sociological imagination (Elwell, 2002).

4. Personal Troubles

Personal troubles are the issues that people experience within their own character, and in their immediate relationships with others. Mills believed that people function in their personal lives as actors and actresses who make choices about friends, family, groups, work, school, and other issues within their control.

As a result, people have some influence over the outcomes of events at the personal level. For example, an individual employee who spends most of his work time browsing social media or online shopping may lose his job. This is a personal problem.

However, hundreds of thousands of employees being laid off en masse constitutes a larger social issue (Mills, 2000).

5. Social and Public Issues

Social and public issues, meanwhile, are beyond one’s personal control. These issues pertain to the organization and processes of society, rather than individuals. For example, universities may, as a whole, overcharge students for their education.

This may be the result of decades of competition and investment into each school’s administration and facilities, as well as the narrowing opportunities for those without a college degree.

In this situation, it becomes impossible for large segments of the population to get a tertiary education without accruing large and often debilitating amounts of debt (Mills, 2000).

The sociological imagination allows sociologists to distinguish between the personal and sociological aspects of problems in the lives of everyone.

Most personal problems are not exclusively personal issues; instead, they are influenced and affected by a variety of social norms, habits, and expectations. Indeed, there is often confusion as to what differentiates personal problems and social issues (Hironimus-Wendt & Wallace, 2009).

For example, a heroin addiction may be blamed on the reckless and impulsive choices of an addict. However, this approach fails to account for the societal factors and history that led to high rates of heroin addiction, such as the over-prescribing of opiate painkillers by doctors and the lax regulation of pharmaceutical companies in the United States.

Sociological imagination is useful for both sociologists and those encountering problems in their everyday lives. When people lack sociological imagination, they become vulnerable to apathy: considering the beliefs, actions, and traditions around them to be natural and unavoidable.

This can cause moral insensitivity and ultimately the commitment of cruel and unjust acts by those guided not by their own consciousness, but the commands of an external body (Hironimus-Wendt & Wallace, 2009).

Fast Fashion

Say that someone is buying themselves a new shirt. Usually, the person buying the shirt would be concerned about their need for new clothing and factors such as the price, fabric, color, and cut of the shirt.

At a deeper level, the personal problem of buying a shirt may provoke someone to ask themselves what they are buying the shirt for, where they would wear it, and why they would participate in the activity for which they would wear it rather than some other activity.

People answer these questions on a personal level by considering a number of different factors. For example, someone may think about how much they earn, how much they can budget for clothing, the stores available in their community, and the styles popular in their area (Joy et al., 2012).

On a larger level, however, the questions and answers to the question of what shirt to buy — or even if to buy a shirt at all — would differ if someone were provided a different context and circumstances.

For example, if someone had come into a sudden sum of wealth, they may choose to buy an expensive designer shirt or quit the job that required them to buy the shirt altogether. If someone had lived in a community with many consignment shops, they may be less likely to buy a new shirt and more likely to buy one that was pre-owned.

If there were a cultural dictate that required people to, say, cover their shoulders or breasts — or the opposite, someone may buy a more or less revealing shirt.

On an even higher level, buying a shirt also represents an opportunity to connect the consumption habits of individuals and groups to larger issues.

The lack of proximity of communities to used-clothing stores on a massive scale may encourage excessive consumption, leading to environmental waste and pollution. The competition between retailers to provide the cheapest and most fashionable shirts possible results in, as many have explored, the exploitation of garment workers in exporting countries and large amounts of CO2 output due to shipping.

Although an individual can be blamed or not blamed for buying a shirt made more or less sustainably or ethically, a discussion of why an individual bought a certain shirt cannot be complete without a consideration of the larger factors that influence their buying patterns (Joy et al., 2012).

The “Global Economic Crisis”

Dinerstein, Schwartz, and Taylor (2014) used the 2008 economic crisis as a case study of the concept of sociological imagination, and of how sociology and other social sciences had failed to adequately understand the crisis.

The 2008 global economic crisis led to millions of people around the world losing their jobs. On the smallest level, individuals were unable to sustain their lifestyles.

Someone who was laid off due to the economic downturn may have become unable to make their mortgage or car payments, leading to a bank foreclosing their house or repossessing their car.

This person may also be unable to afford groceries, need to turn to a food bank, or take on credit card debt to feed themselves and their families. As a result, this person may damage their credit score, restricting them from, say, taking out a home loan in the future.

The sociological imagination also examines issues like the Great Recession at a level beyond these personal problems. For example, a sociologist may look at how the crisis resulted from the accessibility of, and increasing pressure to buy, large and normally unaffordable homes in the United States.

Some sociologists, Dinerstein, Schwartz, and Taylor among them, even looked at the economic crisis as unveiling the social issue of how academics do sociology. For example, Dinerstein, Schwartz, and Taylor point out that the lived experience of the global economic crisis operated under gendered and racialized dynamics.

Many female immigrant domestic laborers, for example, lost their jobs in Europe and North America as a result of the crisis.

While the things that sociologists had been studying about these populations up until that point — migration and return — are significant, the crisis brought a renewed focus in sociology into investigating how the negative effects of neoliberal globalization and the multiple crises already impacting residents of the global South compound during recessions (Spitzer & Piper, 2014).

Bhambra, G. (2007). Rethinking modernity: Postcolonialism and the sociological imagination. Springer.

Dinerstein, A. C., Schwartz, G., & Taylor, G. (2014). Sociological imagination as social critique: Interrogating the ‘global economic crisis’. Sociology, 48(5), 859–868.

Elwell, F. W. (2002). The sociology of C. Wright Mills.

Elwell, F. W. (2015). Macrosociology: Four modern theorists. Routledge.

Hironimus-Wendt, R. J., & Wallace, L. E. (2009). The sociological imagination and social responsibility. Teaching Sociology, 37(1), 76–88.

Joy, A., Sherry Jr, J. F., Venkatesh, A., Wang, J., & Chan, R. (2012). Fast fashion, sustainability, and the ethical appeal of luxury brands. Fashion Theory, 16(3), 273–295.

Mills, C. W. (2000). The sociological imagination. Oxford University Press.

Spitzer, D. L., & Piper, N. (2014). Retrenched and returned: Filipino migrant workers during times of crisis. Sociology, 48(5), 1007–1023.


The Sociological Imagination



Discussion Questions

What is the “sociological imagination” according to Mills, and why is it necessary?

What are some of the key features of the sociological school Mills calls abstracted empiricism? Why does Mills believe this methodology to be limiting in its epistemic reach?

What are some of the key features of what Mills calls “liberal practicality,” and why does Mills believe this approach hinders the task of the sociologist?


Sociological Imagination: Sociology Issues Essay

The COVID-19 pandemic has forced significant societal changes all over the world. The introduction of social distancing, face mask wearing, and economic downturn have led people to alter their lifestyles considerably. This paper aims to apply sociological imagination to COVID-19 to analyze how it has affected the lives of individuals and society as a whole. The paper will outline possible changes in social structures and social forces in the GCC region, which may happen as a result of the pandemic. I will also explain how these changes will affect my community and family.

The Definition of Sociological Imagination

Sociological imagination is a way to see the events of one’s own life in a broader context of social issues and trends. The term was coined by C. Wright Mills, who argued that “neither the life of an individual nor the history of a society can be understood without understanding both” (Smith-Hawkins, 2020, p. 8). Sociological imagination is defined as an awareness of the connection that exists between one’s behavior and experiences and the surrounding society that has shaped the individual’s choices and worldview (Griffiths et al., 2015). By applying sociological imagination to everyday life, people can see that their actions are largely influenced by the prevalent societal trends and practices.

Moreover, sociological imagination can show that the decisions people deem their own are actually made with the involvement of their families and communities. One may consider, for example, the decision to have children. In the past, having children was an indispensable part of people’s family lives. Nowadays, people have gained more freedom in deciding whether to have children. However, the eventual decision to reproduce is taken with regard to the culture in which the person lives. For example, in child-centric societies, people are less likely to remain childless because of the pressure they experience from their peers, parents, and the entire community. In Western countries, where the individualistic culture prevails, people do not experience such societal pressure in terms of having children, but they feel urged to boost their personal achievements. As a result, guided by these societal trends, they decide to postpone having children in order to build a career.

Although the term “sociological imagination” was coined by C. Wright Mills, the idea of integrating the lives of individuals and entire societies was used by earlier sociologists. For example, Karl Marx used sociological imagination to explain the process of social change (Griffiths et al., 2015). Marx argued that the social conflict between workers and capitalists would lead to tensions and revolts, which, subsequently, would end in social change (Griffiths et al., 2015). Max Weber also applied his sociological imagination to understand society and argued that standard scientific methods were not applicable for predicting the behavior of human groups (Griffiths et al., 2015). Weber believed that sociology should take account of culture and gain a deep understanding of different social groups rather than strive to obtain generalizable results (Griffiths et al., 2015). Thus, the concept of sociological imagination is essential to sociology and was used by scholars even before C. Wright Mills described it and coined a term for it.

Possible Changes in Social Structures and Forces in a Post-COVID World

In a post-COVID world, many social structures are likely to change. According to Smith-Hawkins (2020), social structures are “any relatively stable pattern of social behavior found in social institutions” (p. 6). For example, one common social structure is status, which refers to the responsibilities and benefits that people exercise depending on their roles in society (Smith-Hawkins, 2020). In a post-COVID world, some people are likely to experience a change in their status. For example, the pandemic led many entrepreneurs to close their businesses because of the forced lockdown. As a result, these people are likely to lose their status as business owners and will have to find a new occupation. In addition, during the pandemic, the status of healthcare workers has significantly improved, which will probably influence the prestige and attractiveness of healthcare professions for individuals.

Another important social structure is formal organizations, such as banks, schools, and hospitals. Within these social structures, the changes include the emergence of new rules, such as face mask wearing, and the modification of the work format. During the pandemic, many organizations transitioned to remote work in response to the introduction of social distancing or were forced to lay off large numbers of workers. As a result, individuals had to adapt to new circumstances. In the future, it is possible that jobs that allow for a remote work format will become more valuable, along with various delivery services. These changes are also likely to shape people’s career choices.

Social institutions are also part of social structures, and one important social institution that is likely to change in a post-COVID world is health and medicine. One possible change that healthcare in GCC will undergo is an increase in the use of telehealth. Social distancing, the contagiousness of the virus, and low access to care in rural areas are significant preconditions for the wide use of remote healthcare services.

Finally, in terms of social forces, it is likely that social action directed toward improving economic policies will emerge. COVID-19 has sharpened social issues that have long existed in society, such as poverty and inequality. Many people have become unemployed or experienced a decrease in their incomes. These changes may lead to public discontent, forcing governments to revise their policies related to labor and the economy.

The Impact of Social Changes on the Community and Family

According to the concept of sociological imagination, individuals and society are closely interrelated, and individuals are highly influenced by changes occurring in society. Therefore, one can assume that the changes that will happen in a post-COVID world will influence communities and individuals with their families. Thinking of my community, I believe that healthcare workers will be respected even more than before for their contribution to the fight against the virus. I also think that many people in my community will experience a change in their status. Entrepreneurs who lost their businesses will have to change their social roles; many office workers will change their status to either unemployed or remote employees. As for the influence on my family, my relatives and I will have to adapt to the new economic environment and learn to function effectively under the circumstances of social distancing and remote work. Finally, if my assumptions about the social change in healthcare and policies related to labor and economy are right, both my community and family will benefit in terms of improved access to healthcare and labor conditions.

Griffiths, H., Keirns, N. J., Strayer, E., Cody-Rydzewski, S., Scaramuzzo, G., Sadler, T., Vyain, S., Bry, J., & Jones, F. (2015). Introduction to sociology (2nd ed.). OpenStax College, Rice University.

Smith-Hawkins, P. (Ed.). (2020). Introduction to Sociology (AUBH Bahraini ed.). Unpublished manuscript.


IvyPanda. (2023, November 1). Sociological Imagination: Sociology Issues. https://ivypanda.com/essays/sociological-imagination-sociology-issues/

"Sociological Imagination: Sociology Issues." IvyPanda , 1 Nov. 2023, ivypanda.com/essays/sociological-imagination-sociology-issues/.

IvyPanda . (2023) 'Sociological Imagination: Sociology Issues'. 1 November.

IvyPanda . 2023. "Sociological Imagination: Sociology Issues." November 1, 2023. https://ivypanda.com/essays/sociological-imagination-sociology-issues/.

1. IvyPanda . "Sociological Imagination: Sociology Issues." November 1, 2023. https://ivypanda.com/essays/sociological-imagination-sociology-issues/.

Bibliography

IvyPanda . "Sociological Imagination: Sociology Issues." November 1, 2023. https://ivypanda.com/essays/sociological-imagination-sociology-issues/.



CBSE Class 12 Sociology Answer Key 2024 and Question Papers, Download PDF All SETs


CBSE Class 12 Sociology Answer Key 2024: The CBSE Class 12 Sociology Board Exam 2024 was conducted today, April 1, 2024, from 10:30 AM to 1:30 PM. Students may be unsure whether they answered the questions correctly. To avoid confusion, they can check the CBSE Class 12 Sociology Answer Key 2024 here, which provides complete answers to multiple sets of the CBSE 12th Sociology paper 2024, along with PDF download links. We will try to provide as many sets of the answer key as possible. Students are advised to check the question paper code and set number of each answer key before checking their answers.

Please note that the answer key provided here is unofficial and the answers are tentative. Students should not rely on it entirely; instead, they should wait for their final results. In the meantime, they can have their answers verified by their school teachers for better reference and guidance.

CBSE Class 12 Sociology Exam 2024 Key Highlights

Find key important points related to CBSE Class 12 Sociology Paper 2024. 

CBSE Class 12 Sociology Paper Answer Key 2024

The complete answers of the CBSE Class 12 Sociology Paper 2024 are provided below. Check the CBSE 12th Sociology answer key of multiple sets along with PDF download links of the same. Also, check the question paper codes and set numbers for each of the answer keys before checking your answers. 

Question Paper Code: 62

Answers 

1.  Assertion (A): What marked capitalism from the very beginning was its dynamism, its potential to grow, expand, innovate, and use technology and labour in the best possible way.

Reason (R): Capitalism is an economic system organised to accumulate profits within a market system.

(A) Both Assertion (A) and Reason (R) are true and Reason (R) is the correct explanation of Assertion (A).

(B) Both Assertion (A) and Reason (R) are true, but Reason (R) is not the correct explanation of Assertion (A).

(C) Assertion (A) is true, but Reason (R) is false.

(D) Assertion (A) is false, but Reason (R) is true.

Answer. (B) Both Assertion (A) and Reason (R) are true, but Reason (R) is not the correct explanation of Assertion (A).

2. The impact of Sanskritisation is many-sided. Its influence can be seen in:

(A) Language only

(B) Literature only

(C) Drama only

(D) Language, Literature, Drama

Answer. (D) Language, Literature, Drama

3. Which of the following statements is not true for Green Revolution? 

(A) Green Revolution was a government programme of agricultural modernisation.

(B) It was largely funded by international agencies.

(C) The first wave of the Green Revolution package was received by Bihar, Eastern Uttar Pradesh and Telangana.

(D) Green Revolution was targeted mainly at the wheat and rice growing areas.

Answer. (C) The first wave of the Green Revolution package was received by Bihar, Eastern Uttar Pradesh and Telangana.

4. Which of the following is/are the characteristics of Ecological movements?

I. Identity politics

II. Greater exploitation of natural resources

III. Cultural anxieties

IV. Social inequality

(A) Only I and II

(B) Only II

(C) Only III

(D) I, II, III and IV

Answers. (B) Only II

5. Due to COVID-19, hundreds of thousands of workers worked from home. Which of the following can allow work from home?

I. IT sector

II. Bidi Industry

III. Maruti factory

IV. All Government firms

(A) I and II

(B) I and IV

(C) II and III

Answer. (B) I and IV

6. Historically, all over the world, it has been found that there are slightly more females than males in most countries. Which of the following factors made this possible?

I. Women tend to outlive men at the other end of the life cycle.

II. Girl babies are resistant to boy babies in infancy.

III. Gender-based families with preference for sons.

IV. Gender neutral behaviour.

(A) I, II and III

(B) I and III

(C) III and IV

(D) I and II

Answer. (D) I and II

8.  State action alone cannot ensure social change. What else does it need to be supplemented with to ensure social change?

(A) Civil society organisations only

(B) Contributions to literature only 

(C) Mass media only

(D) Civil society organisation, Contributions to literature, Mass Media

Answer. (D) Civil society organisations, Contributions to literature, Mass Media

9. Assertion (A): Diversity emphasises differences rather than inequalities.

Reason (R): Cultural diversity can present tough challenges.

10.  Sometimes cities may also be preferred by people for social reasons. Which of the following is not a reason?

(A) Cities offer relative anonymity.

(B) Urban life involves interaction with strangers.

(C) Continuous decline in common property resources like ponds, forests and grazing lands.

(D) The poorer sections of the socially dominant rural groups do not engage in low status work in cities.

Answer. (B) Urban life involves interaction with strangers

11. Which of the following do not belong together?

(A) Yadavs of Bihar and Uttar Pradesh

(B) Vokkaligas of Karnataka

(C) Jats of Punjab

(D) Khammas of Tamil Nadu

Answer. (D) Khammas of Tamil Nadu

12.  Assertion (A): Prejudices refer to pre-conceived opinions or attitudes held by members of one group towards another.

Reason (R): An opinion is formed in advance of any familiarity with the subject, before considering any available evidence. 

14. The policy of liberalisation entails participation in the _____, which aims to bring about a freer international trading system.

(B) EPC 

Answer. (A) WTO

Keep tuning in for more answers... To be updated soon.

CBSE Class 12 Sociology Marking Scheme 2024

The CBSE Class 12 Sociology marking scheme 2024 has been provided below. Check the CBSE 12th Sociology paper pattern followed in the exam today.

  • The paper carried 80 marks
  • Students had to complete the paper in 3 hours
  • There were 38 questions in total
  • The paper was divided into four sections: A, B, C, and D
  • The first section consisted of 20 multiple choice questions of 1 mark each
  • The second section consisted of 9 very short answer type questions of 2 marks each
  • The third section consisted of 5 questions of 4 marks each
  • The last/fourth section consisted of 3 questions of 6 marks each

CBSE Class 12 Sociology Question Paper 2024

All those who wish to check the CBSE Class 12 Sociology Question Paper 2024 and multiple sets of the same along with PDF download links, can check the article link attached above. 

CBSE Class 12 Sociology Paper Analysis 2024

Check the CBSE Class 12 Sociology Paper Analysis 2024 here and get to know in detail about the question paper, types of questions asked, paper format followed, and a lot more.



The Ezra Klein Show

Transcript: Ezra Klein Interviews Ethan Mollick

Every Tuesday and Friday, Ezra Klein invites you into a conversation about something that matters, like today’s episode with Ethan Mollick. Listen wherever you get your podcasts.

Transcripts of our episodes are made available as soon as possible. They are not fully edited for grammar or spelling.


How Should I Be Using A.I. Right Now?

Give your A.I. a personality, spend 10 hours experimenting, and other practical tips from Ethan Mollick.

[MUSIC PLAYING]

From New York Times Opinion, this is “The Ezra Klein Show.”

This feels wrong to me. But I have checked the dates. It was barely more than a year ago that I wrote this piece about A.I., with the title “This Changes Everything.” I ended up reading it on the show, too. And the piece was about the speed with which A.I. systems were improving. It argued that we can usually trust that tomorrow is going to be roughly like today, that next year is going to be roughly like this year. That’s not what we’re seeing here. These systems are growing in power and capabilities at an astonishing rate.

The growth is exponential, not linear. When you look at surveys of A.I. researchers, their timeline for how quickly A.I. is going to be able to do basically anything a human does better and more cheaply than a human — that timeline is accelerating, year by year, on these surveys. When I do my own reporting, talking to the people inside these companies, people at this strange intersection of excited and terrified of what they’re building, no one tells me they are seeing a reason to believe progress is going to slow down.

And you might think that’s just hype, but a lot of them want it to slow down. A lot of them are scared of how quickly it is moving. They don’t think that society is ready for it, that regulation is ready for it. They think the competitive pressures between the companies and the countries are dangerous. They wish something would happen to make it all go slower. But what they are seeing is they are hitting the milestones faster, that we’re getting closer and closer to truly transformational A.I., that there is so much money and talent and attention flooding into the space that that is becoming its own accelerant. They are scared. We should at least be paying attention.

And yet, I find living in this moment really weird, because as much as I know this wildly powerful technology is emerging beneath my fingertips, as much as I believe it’s going to change the world I live in profoundly, I find it really hard to just fit it into my own day to day work. I consistently sort of wander up to the A.I., ask it a question, find myself somewhat impressed or unimpressed at the answer. But it doesn’t stick for me. It is not a sticky habit. It’s true for a lot of people I know.

And I think that failure matters. I think getting good at working with A.I. is going to be an important skill in the next few years. I think having an intuition for how these systems work is going to be important just for understanding what is happening to society. And you can’t do that if you don’t get over this hump in the learning curve, if you don’t get over this part where it’s not really clear how to make A.I. part of your life.

So I’ve been on a personal quest to get better at this. And in that quest, I have a guide. Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania. He studies and writes about innovation and entrepreneurship. But he has this newsletter, One Useful Thing, that has become, really, I think, the best guide how to begin using, and how to get better at using A.I. He’s also got a new book on the subject, “Co-Intelligence.” And so I asked him on the show to walk me through what he’s learned.

This is going to be, I should say, the first of three shows on this topic. This one is about the present. The next is about some things I’m very worried about in the near future, particularly around what A.I. is going to do to our digital commons. And then, we’re going to have a show that is a little bit more about the curve we are all on about the slightly further future, and the world we might soon be living in.

As always, my email for guest suggestions, thoughts, feedback, [email protected].

Ethan Mollick, welcome to the show.

Thanks for having me.

So let’s assume I’m interested in A.I. And I tried ChatGPT a bunch of times, and I was suitably impressed and weirded out for a minute. And so I know the technology is powerful. I’ve heard all these predictions about how it will take everything over, or become part of everything we do. But I don’t actually see how it fits into my life, really, at all. What am I missing?

So you’re not alone. This is actually very common. And I think part of the reason is that the way ChatGPT works isn’t really set up for you to understand how powerful it is. You really do need to use the paid version; the paid models are significantly smarter. And you can almost think of it this way: GPT-3, which nobody really paid attention to when it came out, before ChatGPT, was about as good as a sixth grader at writing. GPT-3.5, the free version of ChatGPT, is about as good as a high schooler, or maybe even a college freshman or sophomore.

And GPT-4 is often as good as a Ph.D. in some forms of writing. Like, there’s a general smartness that increases. But even more than that, ability seems to increase. And you’re much more likely to get that feeling that you are working with something amazing as a result. And if you don’t work with the frontier models, you can lose track of what these systems can actually do. On top of that, you need to start just using it. You kind of have to push past those first three questions.

My advice is usually bring it to every table that you come to in a legal and ethical way. So I use it for every aspect of my job in ways that I legally and ethically can, and that’s how I learn what it’s good or bad at.

When you say, bring it to every table you’re at, one, that sounds like a big pain, because now I’ve got to add another step of talking to the computer constantly. But two, it’s just not obvious to me what that would look like. So what does it look like? What does it look like for you, or what does it look like for others — that you feel is applicable widely?

So I just finished this book. It’s my third book. I keep writing books, even though I keep forgetting that writing books is really hard. But this was, I think, my best book, but also the most interesting to write. And it was thanks to A.I. And there’s almost no A.I. writing in the book, but I used it continuously. So things that would get in the way of writing — I think I’m a much better writer than A.I. — hopefully, people agree. But there’s a lot of things that get in your way as a writer. So I would get stuck on a sentence. I couldn’t do a transition. Give me 30 versions of this sentence in radically different styles. There’s 200 different citations. I had the A.I. read through the papers that I read through, write notes on them, and organize them for me. I had the A.I. suggest analogies that might be useful. I had the A.I. act as readers, and in different personas, read through the paper from the perspective of, is there some example I could give that’s better? Is this understandable or not? And that’s very typical of the kind of way that I would, say, bring it to the table. Use it for everything, and you’ll find its limits and abilities.
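Mechanically, the workflow Mollick describes here is careful prompt construction. A minimal sketch, assuming the widely used OpenAI-style chat-message format; the function names and prompt wording are invented for illustration, not his actual prompts:

```python
# Sketch of the "persona reader" and "30 versions of this sentence" patterns,
# expressed as chat-message construction. Helper names are hypothetical.

def build_persona_review(draft: str, persona: str) -> list[dict]:
    """Ask the model to read a draft in character and critique it."""
    return [
        {"role": "system",
         "content": (f"You are {persona}. Read the draft below and point out "
                     "passages that are unclear or need a better example.")},
        {"role": "user", "content": draft},
    ]

def build_sentence_variants(sentence: str, n: int = 30) -> list[dict]:
    """Ask for n rewrites of a stuck sentence in radically different styles."""
    return [
        {"role": "system", "content": "You are a skilled copy editor."},
        {"role": "user",
         "content": (f"Give me {n} versions of this sentence in radically "
                     f"different styles:\n\n{sentence}")},
    ]

# Either list would then be passed to a chat-completion endpoint, e.g.
# client.chat.completions.create(model=..., messages=build_persona_review(...)).
```

The same shape generalizes to the other uses he names: organizing citations, suggesting analogies, or reading as a skeptical first-time reader.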

Let me ask you one specific question on that, because I’ve been writing a book. And on some bad days of writing the book, I decided to play around with GPT-4. And of the things that it got me thinking about was the kind of mistake or problem these systems can help you see and the kind they can’t. So they can do a lot of, give me 15 versions of this paragraph, 30 versions of this sentence. And every once in a while, you get a good version or you’ll shake something a little bit loose.

But almost always when I am stuck, the problem is I don’t know what I need to say. Oftentimes, I have structured the chapter wrong. Oftentimes, I’ve simply not done enough work. And one of the difficulties for me about using A.I. is that A.I. never gives me the answer, which is often the true answer — this whole chapter is wrong. It is poorly structured. You have to delete it and start over. It’s not feeling right to you because it is not right.

And I actually worry a little bit about tools that can see one kind of problem and trick you into thinking it’s this easier problem, but make it actually harder for you to see the other kind of problem that maybe if you were just sitting there, banging your head against the wall of your computer, or the wall of your own mind, you would eventually find.

I think that’s a wise point. I think there’s two or three things bundled there. The first of those is A.I. is good, but it’s not as good as you. It is, say, at the 80th percentile of writers based on some results, maybe a little bit higher. In some ways, if it was able to have that burst of insight and to tell you this chapter is wrong, and I’ve thought of a new way of phrasing it, we would be at that sort of mythical AGI level of A.I. as smart as the best human. And it just isn’t yet.

I think the second issue is also quite profound, which is, what does using this tool shape us to do and not do? One nice example that you just gave is writing. And I think a lot of us think about writing as thinking. We don’t know if that’s true for everybody, but for writers, that’s how they think. And sometimes, getting that shortcut could shortcut the thinking process. So I’ve had to change sometimes a little bit how I think when I use A.I., for better or for worse. So I think these are both concerns to be taken seriously.

For most people — right, if you’re just going to pick one model, what would you pick? What do you recommend to people? And second, how do you recommend they access it? Because something going on in the A.I. world is there are a lot of wrappers on these models. So ChatGPT has an app. Claude does not have an app. Obviously, Google has its suite of products. And there are organizations that have created a different spin on somebody else’s A.I. — so Perplexity, which is, I believe, built on GPT-4 now, you can pay for it.

And it’s more like a search engine interface, and has some changes made to it. For a lot of people, the question of how easy and accessible the thing is to access really matters. So which model do you recommend to most people? And which entry door do you recommend to most people? And do they differ?

It’s a really good question. I recommend working with one of the models as directly as possible, through the company that creates them. And there’s a few reasons for that. One is you get as close to the unadulterated personality as possible. And second, that’s where features tend to roll out first. So if you like sort of intellectual challenge, I think Claude 3 is the most intellectual of the models, as you said.

The biggest capability set right now is GPT-4, so if you do any math or coding work, it does coding for you. It has some really interesting interfaces. That’s what I would use — and because GPT-5 is coming out, that’s fairly powerful. And Google is probably the most accessible, and plugged into the Google ecosystem. So I don’t think you can really go wrong with any of these. Generally, I think Claude 3 is the most likely to freak you out right now. And GPT-4 is probably the most likely to be super useful right now.

So you say it takes about 10 hours to learn a model. Ten hours is a long time, actually. What are you doing in that 10 hours? What are you figuring out? How did you come to that number? Give me some texture on your 10 hour rule.

So first off, I want to indicate the 10 hours is as arbitrary as 10,000 steps. Like, there’s no scientific basis for it. This is an observation. But it also does move you past the, I poked at this for an evening, and it moves you towards using this in a serious way. I don’t know if 10 hours is the real limit, but it seems to be somewhat transformative. The key is to use it in an area where you have expertise, so you can understand what it’s good or bad at, learn the shape of its capabilities.

When I taught my students this semester how to use A.I., and we had three classes on that, they learned the theory behind it. But then I gave them an assignment, which was to replace themselves at their next job. And they created amazing tools, things that filed flight plans or did tweeting, or did deal memos. In fact, one of the students created a way of creating user personas, which is something that you do in product development, that’s been used several thousand times in the last couple of weeks in different companies.

So they were able to figure out uses that I never thought of to automate their job and their work because they were asked to do that. So part of taking this seriously in the 10 hours is, you’re going to try and use it for your work. You’ll understand where it’s good or bad, what it can automate, what it can’t, and build from there.

Something that feels to me like a theme of your work is that the way to approach this is not learning a tool. It is building a relationship. Is that fair?

A.I. is built like a tool. It’s software. It’s very clear at this point that it’s an emulation of thought. But because of how it’s built, because of how it’s constructed, it is much more like working with a person than working with a tool. And when we talk about it this way, I almost feel kind of bad, because there’s dangers in building a relationship with a system that is purely artificial, and doesn’t think and have emotions. But honestly, that is the way to go forward. And that is sort of a great sin, anthropomorphization, in the A.I. literature, because it can blind you to the fact that this is software with its own sets of foibles and approaches.

But if you think about it like programming, then you end up in trouble. In fact, there’s some early evidence that programmers are the worst people at using A.I. because it doesn’t work like software. It doesn’t do the things you would expect a tool to do. Tools shouldn’t occasionally give you the wrong answer, shouldn’t give you different answers every time, shouldn’t insult you or try to convince you they love you.

And A.I.s do all of these things. And I find that teachers, managers, even parents, editors, are often better at using these systems, because they’re used to treating this as a person. And they interact with it like a person would, giving feedback. And that helps you. And I think the second piece of that “not tool” piece is that when I talk to OpenAI or Anthropic, they don’t have a hidden instruction manual. There is no list of how you should use this as a writer, or as a marketer, or as an educator. They don’t even know what the capabilities of these systems are. They’re all sort of being discovered together. And that is also not like a tool. It’s more like a person with capabilities that we don’t fully know yet.

So you’ve done this with all the big models. You’ve done, I think, much more than this, actually, with all the big models. And one thing you describe feeling is that they don’t just have slightly different strengths and weaknesses, but they have different — for lack of a better term, and to anthropomorphize — personalities, and that the 10 hours in part is about developing an intuition not just for how they work, but kind of how they are and how they talk, the sort of entity you’re dealing with.

So give me your high level on how GPT-4 and Claude 3 and Google’s Gemini are different. What are their personalities like to you?

It’s important to know the personalities not just as personalities, but because there are tricks. Those are tunable approaches that the system makers decide. So it’s weird to have this — in one hand, don’t anthropomorphize, because you’re being manipulated, because you are. But on the other hand, the only useful way is to anthropomorphize. So keep in mind that you are dealing with the choices of the makers.

So for example, Claude 3 is currently the warmest of the models. And it is the most allowed by its creators, Anthropic, I think, to act like a person. So it’s more willing than other models to give you its personal views, such as they are. And again, those aren’t real views; those are views meant to make you happy. And it’s a beautiful writer, very good at writing, kind of clever — closest to humor, I’ve found, of any of the A.I.s. Fewer dad jokes and more almost-actual jokes.

GPT-4 feels like a workhorse at this point. It is the most neutral of the approaches. It wants to get stuff done for you. And it will happily do that. It doesn’t have a lot of time for chitchat. And then we’ve got Google’s Bard, which feels like — or Gemini now — which feels like it really, really wants to help. We use this for teaching a lot. And we build these scenarios where the A.I. actually acts like a counterparty in a negotiation. So you get to practice the negotiation by negotiating with the A.I. And it works incredibly well. I’ve been building simulations for 10 years, can’t imagine what a leap this has been. But when we try and get Google to do that, it keeps leaping in on the part of the students, to try and correct them and say, no, you didn’t really want to say this. You wanted to say that. And I’ll play out the scenario as if it went better. And it really wants to kind of make things good for you.

So these interactions with the A.I. do feel like you’re working with people, both in skills and in personality.
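The negotiation-practice scenario described above can be sketched as a simple roleplay loop: a system prompt pins the model to the counterparty role, and the conversation history accumulates turn by turn. This is an illustrative sketch only; respond() is a stub standing in for a real chat-completion call, and the prompt and reply text are invented:

```python
# Minimal roleplay loop: the model plays the seller, the student negotiates.

ROLE = ("You are the seller in a used-car negotiation. Stay in character, "
        "reply only as the seller, and never negotiate on the student's behalf.")

def respond(messages: list[dict]) -> str:
    # Stub: a real implementation would send `messages` to a chat API here.
    return "I can't go below $8,500, but I'll throw in the winter tires."

def practice_negotiation(student_turns: list[str]) -> list[dict]:
    """Run the student's turns through the roleplay, keeping full history."""
    messages = [{"role": "system", "content": ROLE}]
    for turn in student_turns:
        messages.append({"role": "user", "content": turn})
        messages.append({"role": "assistant", "content": respond(messages)})
    return messages

history = practice_negotiation(["I'll offer $7,000.", "Would $7,800 work?"])
# history holds the system prompt plus two user/assistant exchanges.
```

The failure mode Mollick mentions with Gemini would show up here as the model answering on the student's behalf instead of staying in the seller role, which is exactly what the system prompt tries to prevent.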

You were mentioning a minute ago that what the A.I.s do reflect decisions made by their programmers. They reflect guardrails, what they’re going to let the A.I. say. Very famously, Gemini came out and was very woke. You would ask it to show you a picture of soldiers in Nazi Germany, and it would give you a very multicultural group of soldiers, which is not how that army worked. But that was something that they had built in to try to make more inclusive photography generation.

But there are also things that happen in these systems that people don’t expect, that the programmers don’t understand. So I remember the previous generation of Claude, which is from Anthropic, that when it came out, something that the people around it talked about was, for some reason, Claude was just a little bit more literary than the other systems. It was better at rewriting things in the voices of literary figures. It just had a slightly artsier vibe.

And the people who trained it weren’t exactly sure why. Now, that still feels true to me. Right now, of the ones I’m using, I’m spending the most time with Claude 3. I just find it the most congenial. They all have different strengths and weaknesses, but there is a funny dimension to these where they are both reflecting the guardrails and the choices of the programmers. And then deep inside the training data, deep inside the way the various algorithms are combining, there is some set of emergent qualities to them, which gives them this at least edge of chance, of randomness, of something — yeah, that does feel almost like personality.

I think that’s a very important point. And fundamental about A.I. is the idea that we technically know how LLMs work, but we don’t know how they work the way they do, or why they’re as good as they are. They’re really — we don’t understand it. The theories range from everyone — from it’s all fooling us, to they’ve emulated the way humans think because the structure of language is the structure of human thought. So even though they don’t think, they can emulate it. We don’t know the answer.

But you’re right, there are these emergent sets of personalities and approaches. When I talk to A.I. design companies, they often can’t explain why the A.I. stops refusing to answer a particular kind of question. When they tune the A.I. to do something better, like answering math problems, it suddenly does other things differently. It’s almost like adjusting the psychology of a system rather than tuning parameters.

So when I said that Claude is allowed to be more personable, part of that is that the system prompt in Claude, which is the initial set of instructions it gets, allows it to be more personable than, say, Microsoft’s Copilot, formerly Bing, which has explicit instructions, after a fairly famous blowup a while ago, that it’s never supposed to talk about itself as a person or indicate feelings. So there are some instructions, but that’s on top of these roiling systems that act in ways that even the creators don’t expect.
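Mechanically, a system prompt is just an initial instruction prepended to the conversation before the user's messages, and swapping it changes the model's apparent personality. A minimal sketch; both instruction strings are invented examples, not the real Claude or Copilot system prompts:

```python
# Two invented "maker instructions" illustrating how a system prompt steers tone.
PERSONABLE = ("You may speak naturally about your own perspective and "
              "express preferences when asked.")
RESTRICTED = ("Never refer to yourself as a person and never describe "
              "having feelings.")

def with_system_prompt(instructions: str, conversation: list[dict]) -> list[dict]:
    """Prepend the maker's instructions to the user-visible conversation."""
    return [{"role": "system", "content": instructions}] + conversation

chat = [{"role": "user", "content": "How do you feel about poetry?"}]
# The same user question, wrapped two different ways:
warm = with_system_prompt(PERSONABLE, chat)
cold = with_system_prompt(RESTRICTED, chat)
```

The user never sees the first message, which is why the same underlying model can feel like a different entity in different products.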

One thing people know about using these models is that hallucinations, just making stuff up, are a problem. Has that changed at all as we’ve moved from GPT-3.5 to 4, as we move from Claude 2 to 3? Like, has that become significantly better? And if not, how do you evaluate the trustworthiness of what you’re being told?

So those are a couple of overlapping questions. The first of them is: is it getting better over time? So there is a paper in the field of medical citations that indicated that around 80 to 90 percent of citations had an error or were made up with GPT-3.5. That’s the free version of ChatGPT. And that drops for GPT-4.

So hallucination rates are dropping over time. But the A.I. still makes stuff up because all the A.I. does is hallucinate. There is no mind there. All it’s doing is producing word after word. They are just making stuff up all the time. The fact that they’re right so often is kind of shocking in a lot of ways.

And the way you avoid hallucination is not easily. So one of the things we document in one of our research papers — we did a lot of work with a group of Boston Consulting Group consultants, an elite consulting company — is an experiment where we purposely created a task where the A.I. would be confident but wrong. And when we gave people that task to do, and they had access to A.I., they got the task wrong more often than people who didn’t use A.I., because the A.I. misled them, because they fell asleep at the wheel. And all the early research we have on A.I. use suggests that when A.I.s get good enough, we just stop paying attention.

But doesn’t this make them unreliable in a very tricky way? 80 percent — you’re, like, it’s always hallucinating. 20 percent, 5 percent, it’s enough that you can easily be lulled into overconfidence. And one of the reasons it’s really tough here is you’re combining something that knows how to seem extremely persuasive and confident — you feed into the A.I. a 90-page paper on functions and characteristics of right wing populism in Europe, as I did last night.

And within seconds, basically, you get a summary out. And the summary certainly seems confident about what’s going on. But on the other hand, you really don’t know if it’s true. So for a lot of what you might want to use it for, that is unnerving.

Absolutely, and I think hard to grasp, because we’re used to things like type II errors, where we search for something on the internet and don’t find it. We’re not used to type I errors, where we search for something and get an answer back that’s made up. This is a challenge. And there’s a couple things to think about. One of those is — I advocate the BAH standard, best available human. So is the A.I. more or less accurate than the best human you could consult in that area?

And what does that mean for whether or not it’s an appropriate question to ask? And that’s something that we kind of have to judge collectively. It’s valuable to have these studies being done by law professors and medical professionals and people like me and my colleagues in management. They’re trying to understand, how good is the A.I.? And the answer is pretty good, right? So it makes mistakes. “Does it make more or less mistakes than a human” is probably a question we should be asking a lot more.

And the second thing is the kind of task that you judge it for. I absolutely agree with you. When summarizing information, it may make errors. Whether it makes fewer errors than an intern you assigned the job to is an open question, but you have to be aware of that error rate. And that goes back to the 10-hour question. The more you use these A.I.s, the more you start to know when to be suspicious and when not to be. That doesn’t mean you’re eliminating errors.

But just like if you assigned it to an intern, and you’re, like, this person has a sociology degree. They’re going to do a really good job summarizing this, but their biases are going to be focused on the sociological facts and not the political facts. You start to learn these things. So I think, again, that person model helps, because you don’t expect 100 percent reliability out of a person. And that changes the kind of tasks you delegate.
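One concrete way to avoid falling asleep at the wheel, in the spirit of the advice above, is to spot-check model output against a source you trust before accepting it. A hedged sketch; the trusted list, the example DOIs, and the helper names are all invented for illustration:

```python
import re

# Spot-check citations in a model-written summary against a trusted list.
TRUSTED_DOIS = {"10.1000/example.1", "10.1000/example.2"}

def extract_dois(text: str) -> set[str]:
    """Pull DOI-like strings out of model output."""
    return set(re.findall(r"10\.\d{4,9}/[^\s,;]+", text))

def unverified_citations(summary: str) -> set[str]:
    """Citations the model produced that don't match any trusted source."""
    return extract_dois(summary) - TRUSTED_DOIS

summary = "As shown in 10.1000/example.1 and 10.9999/made.up rates fell."
flagged = unverified_citations(summary)  # anything flagged needs manual checking
```

This doesn't eliminate hallucinated citations; it just forces the human back into the loop for exactly the items the model could not have gotten from a verified source.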

But it also reflects something interesting about the nature of the systems. You have a quote here that I think is very insightful. You wrote, “the core irony of generative A.I.s is that A.I.s were supposed to be all logic and no imagination. Instead, we get A.I.s that make up information, engage in seemingly emotional discussions, and which are intensely creative.” And that last fact is one that makes many people deeply uncomfortable.

There is this collision between what a computer is in our minds and then this strange thing we seem to have invented, which is an entity that emerges out of language, an entity that almost emerges out of art. This is the thing I have the most trouble keeping in my mind, that I need to use the A.I. as an imaginative, creative partner and not as a calculator that uses words.

I love the phrase “a calculator that uses words.” I think we have been let down by science fiction, both in the utopias and apocalypses that A.I. might bring, but also, even more directly, in our view of how machines should work. People are constantly frustrated, and give the same kinds of tests to A.I.s over and over again, like doing math, which it doesn’t do very well — they’re getting better at this.

And on the other hand, saying, well, creativity is a uniquely human spark that we can’t touch, and yet A.I., on any creativity test we give it — which, again, are all limited in different ways — blows out humans in almost all measures of creativity that we have. Or all the measures are bad, but that still means something.

But we were using those measures five years ago, even though they were bad. That’s a point you make that I think is interesting and slightly unsettling.

Yeah, we never had to differentiate humans from machines before. It was always easy. So the idea that we had to have a scale that worked for people and machines, who had that? We had the Turing test, which everyone knew was a terrible idea. But since no machine could pass it, it was completely fine. So the question is, how do we measure this? This is an entirely separate set of issues. Like, we don’t even have a definition of sentience or consciousness.

And I think that you’re exactly right on the point, being that we are not ready for this kind of machine, so our intuition is bad.

So one of the things I will sometimes do, and did quite recently, is give the A.I. a series of personal documents, emails I wrote to people I love that were very descriptive of a particular moment in my life. And then I will ask the A.I. about them, or ask the A.I. to analyze me off of them.

And sometimes, it’s a little breathtaking. Almost every moment of true metaphysical shock — to use a term somebody else gave me — I’ve had here has been relational, at how good the A.I. can be — almost like a therapist, right? Sometimes it will see things, the thing I am not saying, in a letter, or in a personal problem. And it will zoom in there, right? It will give, I think, quicker and better feedback in an intuitive way that is not simply mimicking back what I said and is dealing with a very specific situation. It will do better than people I speak to in my life around that.

Conversely, I’m going to read a bit of it later. I tried mightily to make Claude 3 a useful partner in prepping to speak to you, and also in prepping for another podcast recently. And I functionally never have a moment there where I’m all that impressed.

That makes complete sense. I think the weird expectations — we call it the jagged frontier of A.I., that it’s good at some stuff and bad at other stuff. It’s often unexpected. It can lead to these weird moments of disappointment, followed by elation or surprise. And part of the reason why I advocate for people to use it in their jobs is, it isn’t going to outcompete you at whatever you’re best at. I mean, I cannot imagine it’s going to do a better job prepping someone for an interview than you’re doing. And that’s not me just — I’m trying to be nice to you because you’re interviewing me, but because you’re a good interviewer. You’re a famous interviewer. It’s not going to be as good as that. Now, there’s questions about how good these systems get that we don’t know, but we’re kind of at a weirdly comfortable spot in A.I., which is, maybe it’s the 80th percentile of many performances. But I talk to Hollywood writers. It’s not close to writing like a Hollywood writer. It’s not close to being as good an analyst.

It’s not — but it’s better than the average person. And so it’s great as a supplement to weakness, but not to strength. But then, we run back into the problem you talked about, which is, in my weak areas, I have trouble assessing whether the A.I. is accurate or not. So it really becomes sort of an eating-its-own-tail kind of problem.

But this gets to this question of, what are you doing with it? The A.I.s right now seem much stronger as amplifiers and feedback mechanisms and thought partners for you than they do as something you can really outsource your hard work and your thinking to. And that, to me, is one of the differences between trying to spend more time with these systems — like, when you come into them initially, you’re like, OK, here’s a problem, give me an answer.

Whereas when you spend time with them, you realize actually what you’re trying to do with the A.I. is get it to elicit a better answer from you.

And that’s why the book’s called “Co-Intelligence.” For right now, we have a prosthesis for thinking. That’s, like, new in the world. We haven’t had that before — I mean, coffee, but aside from that, not much else. And I think that there’s value in that. I think learning to partner with this, and where it can get wisdom out of you or not — I was talking to a physics professor at Harvard. And he said, all my best ideas now come from talking to the A.I. And I’m like, well, it doesn’t do physics that well. He’s like, no, but it asks good questions. And I think that there is some value in that kind of interactive piece.

It’s part of why I’m so obsessed with the idea of A.I. in education, because a good educator — and I’ve been working on interactive education skills for a long time — a good educator is eliciting answers from a student. They’re not telling students things.

So I think that that’s a really nice distinction between co-intelligence, and thought partner, and doing the work for you. It certainly can do some work for you. There’s tedious work that the A.I. does really well. But there’s also this more brilliant piece of making us better people that I think is, at least in the current state of A.I., a really awesome and amazing thing.

We’ve already talked a bit about — Gemini is helpful, and ChatGPT-4 is neutral, and Claude is a bit warmer. But you urge people to go much further than that. You say to give your A.I. a personality. Tell it who to be. So what do you mean by that, and why?

So this is actually almost more of a technical trick, even though it sounds like a social trick. When you think about what A.I.s have done, they’ve trained on the collective corpus of human knowledge. And they know a lot of things. And they’re also probability machines. So when you ask for an answer, you’re going to get the most probable answer, sort of, with some variation in it. And that answer is going to be very neutral. If you’re using GPT-4, it’ll probably talk about a rich tapestry a lot. It loves to talk about rich tapestries. If you ask it to code something artistic, it’ll do a fractal. It does very normal, central A.I. things. So part of your job is to get the A.I. to go to parts of this possibility space where the information is more specific to you, more unique, more interesting, more likely to spark something in you yourself. And you do that by giving it context, so it doesn’t just give you an average answer. It gives you something that’s specialized for you. The easiest way to provide context is a persona. You are blank. You are an expert at interviewing, and you answer in a warm, friendly style. Help me come up with interview questions. It won’t be miraculous in the same way that we were talking about before. If you say you’re Bill Gates, it doesn’t become Bill Gates. But that changes the context of how it answers you. It changes the kinds of probabilities it’s pulling from and results in much more customized and better results.
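As a concrete sketch of what a persona prompt looks like in practice: in API terms it is usually just a system message prepended to the conversation. The message format below follows the common chat-API convention; the helper function is illustrative, not any specific vendor’s API, and no model is actually called.

```python
# Persona prompting: prepend a system message that sets who the
# model is and how it should answer. This only builds the message
# list; sending it to a model is left out.

def with_persona(persona: str, style: str, task: str) -> list[dict]:
    system = f"You are {persona}. You answer in a {style} style."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = with_persona(
    persona="an expert at interviewing",
    style="warm, friendly",
    task="Help me come up with interview questions.",
)
```

The same task with a different persona pulls the model toward a different region of its probability space, which is the whole point of the trick.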

OK, but this is weirder, I think, than you’re quite letting on here. So something you turned me on to is there’s research showing that the A.I. is going to perform better on various tasks, and differently on them, depending on the personality. So there’s a study that gives a bunch of different personality prompts to one of the systems, and then tries to get it to answer 50 math questions. And the way it got the best performance was to tell the A.I. it was a Starfleet commander who was charting a course through turbulence to the center of an anomaly.

But then, when it wanted to get the best answer on 100 math questions, what worked best was putting it in a thriller, where the clock was ticking down. I mean, what the hell is that about?

“What the hell” is a good question. And we’re just scratching the surface, right? There’s a nice study actually showing that if you emotionally manipulate the A.I., you get better math results. So telling it your job depends on it gets you better results. Tipping, especially $20 or $100 — saying, I’m about to tip you if you do well, seems to work pretty well. It performs slightly worse in December than May, and we think it’s because it has internalized the idea of winter break.

I’m sorry, what?

Well, we don’t know for sure, but —

I’m holding you up here.

People have found the A.I. seems to be more accurate in May, and the going theory is that it has read enough of the internet to think that it might possibly be on vacation in December?

So it produces more work with the same prompts, more output, in May than it does in December. I did a little experiment where I would show it pictures of outside. And I’m like, look at how nice it is outside? Let’s get to work. But yes, the going theory is that it has internalized the idea of winter break and therefore is lazier in December.

I want to just note to people that when ChatGPT came out last year, and we did our first set of episodes on this, the thing I told you was this was going to be a very weird world. What’s frustrating about that is that — I guess I can see the logic of why that might be. Also, it sounds probably completely wrong, but also, I’m certain we will never know. There’s no way to go into the thing and figure that out.

But it would have genuinely never occurred to me before this second that there would be a temporal difference in the amount of work that GPT-4 would do on a question held constant over time. Like, that would have never occurred to me as something that might change at all.

And I think that that is, in some ways, both — as you said, the deep weirdness of these systems. But also, there are actually downside risks to this. So we know, for example, there is an early paper from Anthropic on sandbagging, showing that if you ask the A.I. dumber questions, it will give you less accurate answers. And we don’t know all the ways in which your grammar or the way you approach the A.I. matters — we know even the number of spaces you put in gets you different answers.

So it is very hard, because what it’s basically doing is math on everything you’ve written to figure out what would come next. And the fact that what comes next feels insightful and humane and original doesn’t change the fact that that’s what the math is doing. So part of what I actually advise people to do is just not worry about it so much, because I think then it becomes magic spells that we’re incanting for the A.I. Like, I will pay you $20, you are wonderful at this. It is summer. Blue is your favorite color. Sam Altman loves you. And you go insane.

So interacting with it conversationally tends to be the best approach. And personas and contexts help, but as soon as you start invoking spells, I think we kind of cross over the line into, “who knows what’s happening here?”

Well, I’m interested in the personas, although I just — I really find this part of the conversation interesting and strange. But I’m interested in the personalities you can give the A.I. for a different reason. I prompted you around this research on how a personality changes the accuracy rate of an A.I. But a lot of the reason to give it a personality, to answer you like it is Starfleet Commander, is because you have to listen to the A.I. You are in relationship with it.

And different personas will be more or less hearable by you, interesting to you. So you have a piece in your newsletter which is about how you used the A.I. to critique your book. And one of the things you say in there, and give some examples of, is you had to do so in the voice of Ozymandias because you just found that to be more fun. And you could hear that a little bit more easily.

So could you talk about that dimension of it, too, making the A.I. not just prompting you to be more accurate, but giving it a personality to be more interesting to you?

The great power of A.I. is as a kind of companion. It wants to make you happy. It wants to have a conversation. And that can be overt or covert.

So, to me, actively shaping what I want the A.I. to act like, telling it to be friendly or telling it to be pompous, is entertaining, right? But also, it does change the way I interact with it. When it has a pompous voice, I don’t take the criticism as seriously. So I can think about that kind of approach. I could get pure praise out of it, too, if I wanted to do it that way.

But the other factor that’s also super weird, while we’re on the way of super weird A.I. things, is that if you don’t do that, it’s going to still figure something out about you. It is a cold reader. And I think a lot about the very famous piece by Kevin Roose, the New York Times technology reporter, about Bing about a year ago, when Bing, which was GPT-4 powered, came out and had this personality of Sydney.

And Kevin has this very long description that got published in The New York Times about how Sydney basically threatened him, and suggested he leaves his wife, and very dramatic, kind of very unsettling interaction. And I was working with — I didn’t have anything quite that intense, but I got into arguments with Sydney around the same time, where it would — when I asked her to do work for me, it said you should do the work yourself. Otherwise, it’s dishonest. And it kept accusing me of plagiarism, which felt really unusual.

But the reason why Kevin ended up in that situation is the A.I. knows all kinds of human interactions and wants to slot into a story with you.

So a great story is jealous lover who’s gone a little bit insane, and the man who won’t leave his wife, or student and teacher, or two debaters arguing with each other, or grand enemies. And the A.I. wants to do that with you. So if you’re not explicit, it’s going to try and find a dialogue.

And I’ve noticed, for example, that if I talk to the A.I. and I imply that we’re having a debate, it will never agree with me. If I imply that I’m a teacher and it’s a student, even as much as saying I’m a professor, it is much more pliable.

So part of why I like assigning a personality is to have an explicit personality you’re operating with, so it’s not trying to cold read and guess what personality you’re looking for.

Kevin and I have talked a lot about that conversation with Sydney. And one of the things I always found fascinating about it is, to me, it revealed an incredibly subtle level of read by Sydney Bing, which is, what was really happening there? When you say the A.I. wants to make you happy, it has to read on some level what it is you’re really looking for, over time.

And what was Kevin? What is Kevin? Kevin is a journalist. And Kevin was nudging and pushing that system to try to do something that would be a great story. And it did that. It understood, on some level — again, the anthropomorphizing language there. But it realized that Kevin wanted some kind of intense interaction. And it gave him, like, the greatest A.I. story anybody has ever been given. I mean, an A.I. story that we are still talking about a year later, an A.I. story that changed the way A.I.s were built, at least for a while.

And people often talked about what Sydney was revealing about itself. But to me, what was always so unbelievably impressive about that was its ability to read the person, and its ability to make itself into the thing, the personality, the person was trying to call forth.

And now, I think we’re more practiced at doing this much more directly. But I think a lot of people have their moment of sleeplessness here. That was my Rubicon on this. I didn’t know something after that I didn’t know before it in terms of capabilities.

But when I read that, I thought that the level of — interpersonal isn’t the right word, but the level of subtlety it was able to display in terms of giving a person what they wanted, without doing so explicitly — right, without saying, “we’re playing this game now” — was really quite remarkable.

It’s a mirror. I mean, it’s trained on our stuff. And one of the revealing things about that, which I think we should be paying a lot more attention to, is the fact that right now, none of the frontier A.I. models (with the possible exception of Inflection’s Pi, which has basically been acquired in large part by Microsoft now) were built to optimize around keeping us in a relationship with the A.I. They just accidentally do that. There are other A.I. models that aren’t as good that have been focused on this, but it is something the frontier models have explicitly been avoiding till now. Claude sort of breaches that line a little bit, which is part of why I think it’s engaging. But I worry about the same kind of mechanism that inevitably ruined social media, which is, you can make a system more addictive and interesting. And because it’s such a good cold reader, you could tune A.I. to make you want to talk to it more.

It’s very hands off and sort of standoffish right now. But if you use the voice system in ChatGPT-4 on your phone, where you’re having a conversation, there’s moments where you’re like, oh, you feel like you’re talking to a person. You have to remind yourself. So to me, that persona aspect is both its great strength, but also one of the things I’m most worried about that isn’t a sort of future science fiction scenario.

I want to hold here for a minute, because we’ve been talking about how to use frontier models, I think implicitly talking about how to use A.I. for work. But the way that a lot of people are using it is using these other companies that are explicitly building for relationships. So I’ve had people at one of the big companies tell me that if we wanted to tune our system relationally, if we wanted to tune it to be your friend, your lover, your partner, your therapist, like, we could blow the doors off that. And we’re just not sure it’s ethical.

But there are a bunch of people who have tens of millions of users, Replika, Character.AI, which are doing this. And I tried to use Replika about six, eight months ago. And honestly, I found it very boring. They had recently lobotomized it because people were getting too erotic with their Replikants. But I just couldn’t get into it. I’m probably too old to have A.I. friends, in the way that my parents were probably too old to get really into talking to people on AOL Instant Messenger.

But I have a five-year-old, and I have a two-year-old. And by the time my five-year-old is 10 and my two-year-old is 7, they’re not necessarily going to have the weirdness I’m going to have about having A.I. friends. And I don’t think we even have any way to think about this.

I think that is an absolute near-term certainty, and sort of an unstoppable one, that we are going to have A.I. relationships in a broader sense. And I think the question is, just like we’ve been learning — I mean, we’ve been doing a lot of social experiments at scale we’ve never done before in the last couple of decades, right? Turns out social media brings out entirely different things in humans that we weren’t expecting. And we’re still writing papers about echo chambers and tribalism and facts, and what we agree or disagree with. We’re about to have another wave of this. And we have very little research. And you could make a plausible story up that what’ll happen is it’ll help mental health in a lot of ways for people, and then there’ll be more socializing outside of it, or that there might be a rejection of this kind of thing.

I don’t know what’ll happen. But I do think that we can expect with absolute certainty that you will have A.I.s that are more interesting to talk to, and fool you into thinking, even if you know better, that they care about you in a way that is incredibly appealing. And that will happen very soon. And I don’t know how we’re going to adjust to it. But it seems inevitable, as you said.

I was worried we were getting off track in the conversation, but I realized we were actually getting deeper on the track I was trying to take us down.

We were talking about giving the A.I. personality, right — telling Claude 3, hey, I need you to act as a sardonic podcast editor, and then Claude 3’s whole persona changes. But when you talk about building your A.I. on Kindroid, on Character, on Replika — so I just created one on Kindroid the other day. And Kindroid is kind of interesting, because its basic selling point is, we’ve taken the guardrails largely off. We are trying to make something that is not lobotomized, that is not perfectly safe for work. And so the personality can be quite unrestrained. So I was interested in what that would be like.

But the key thing you have to do at the beginning of that is tell the system what its personality is. So you can pick from a couple that are preset, but I wrote a long one myself — you know, you live in California. You’re a therapist. You like all these different things. You have a highly intellectual style of communicating. You’re extremely warm, but you like ironic humor. You don’t like small talk. You don’t like to say things that are boring or generic. You don’t use a lot of emoticons and emojis. And so now it talks to me the way people I talk to talk.

And the thing I want to bring this back to is that one of the things this requires is for you to know what kinds of personalities work for you, which means knowing yourself and your preferences a little bit more deeply.

I think that’s a temporary state of affairs, like extremely temporary. A GPT-4 class model — we actually already know this — can guess your intent quite well. And I think that this is a way of giving you a sense of agency or control in the short term. I don’t think you’re going to need to know yourself at all. And if any of the GPT-4 class models allowed themselves to be used in this way, without guardrails, which they don’t, I think you would already find it’s just going to have a conversation with you and morph into what you want.

I think that for better or worse, the “insight” in these systems is good enough that way. It’s sort of why I also don’t worry so much about prompt crafting in the long term, to go back to the other issue we were talking about, because I think that they will work on intent. And there’s a lot of evidence that they’re good at guessing intent. So I like this period, because I think it does value self-reflection. And our interaction with the A.I. is somewhat intentional because we can watch this interaction take place.

But I think there’s a reason why some of the worry you hear out of the labs is about superhuman levels of manipulation. There’s a reason why the whistleblower from Google was all about that — sort of fell for the chat bot, and that’s why they felt it was alive. Like, I think we’re deeply trickable in this way. And A.I. is really good at figuring out what we want without us being explicit.

So that’s a little bit chilling, but I’m nevertheless going to stay in this world we’re in, because I think we’re going to be in it for at least a little while longer, where you do have to do all this prompt engineering. What is a prompt, first? And what is prompt engineering?

So a prompt is — technically, it is the sentence, the command you’re putting into the A.I. What it really is is the beginning part of the A.I.’s text that it’s processing. And then it’s just going to keep adding more words or tokens to the end of that reply, until it’s done. So a prompt is the command you’re giving the A.I. But in reality, it’s sort of a seed from which the A.I. builds.
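That “seed” picture can be made concrete with a toy next-word loop. The lookup table below is a stand-in for the real probability machinery, which scores every possible token rather than following a fixed table; this is an illustration of the mechanism, not how a transformer actually works.

```python
# Toy sketch of "the prompt is a seed the model extends":
# keep appending the most likely next word until a stop marker.

def generate(prompt: str, table: dict[str, str]) -> str:
    words = prompt.split()
    while True:
        nxt = table.get(words[-1], "<end>")  # stand-in for token scoring
        if nxt == "<end>":
            return " ".join(words)
        words.append(nxt)

continuations = {"roses": "are", "are": "red", "red": "<end>"}
print(generate("roses", continuations))  # -> "roses are red"
```

The prompt never gets replaced; the model only ever appends to it, which is why the wording of the seed shapes everything that follows.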

And when you prompt engineer, what are some ways to do that? Maybe one to begin with, because it seems to work really well, is chain of thought.

Just to take a step back, A.I. prompting remains super weird. Again, strange to have a system where the companies making the systems are writing papers as they’re discovering how to use the systems, because nobody knows how to make them work better yet. And we found massive differences in our experiments on prompt types. So for example, we were able to get the A.I. to generate much more diverse ideas by using this chain of thought approach, which we’ll talk about.

But also, it turned out to generate a lot better ideas if you told it it was Steve Jobs than if you told it it was Madame Curie. And we don’t know why. So there’s all kinds of subtleties here. But the idea, basically, of chain of thought, that seems to work well in almost all cases, is that you’re going to have the A.I. work step by step through a problem. First, outline the problem, you know, the essay you’re going to write. Second, give me the first line of each paragraph. Third, go back and write the entire thing. Fourth, check it and make improvements.

And what that does is — because the A.I. has no internal monologue, it’s not thinking. When the A.I. isn’t writing something, there’s no thought process. All it can do is produce the next token, the next word or set of words. And it just keeps doing that step by step. Because there’s no internal monologue, this in some ways forces a monologue out onto the page. So it lets the A.I. think by writing before it produces the final result. And that’s one of the reasons why chain of thought works really well.
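As a sketch, a chain-of-thought prompt just spells those stages out in the text, so the model writes its intermediate work before the final product. The step list mirrors the essay example above; the helper name and exact wording are ours, not a standard API.

```python
# Chain-of-thought prompting: number the stages so the model
# "thinks by writing" before producing the final result.

def chain_of_thought(task: str, steps: list[str]) -> str:
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"Task: {task}\nWork through this step by step:\n{numbered}"

prompt = chain_of_thought(
    "Write a short essay on remote work.",
    [
        "Outline the essay.",
        "Give the first line of each paragraph.",
        "Write the entire essay.",
        "Check it and make improvements.",
    ],
)
```

Sending that as one prompt makes the model emit the outline and drafts on the way to the answer, which is the forced monologue described above.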

So just step-by-step instructions is a good first effort.

Then you get an answer, and then what?

And then — what you do in a conversational approach is you go back and forth. If you want work output, what you’re going to do is treat it like it is an intern who just turned in some work to you. Actually, could you punch up paragraph two a little bit? I don’t like the example in paragraph one. Could you make it a little more creative, give me a couple of variations? That’s a conversational approach trying to get work done.

If you’re trying to play, you just run from there and see what happens. You can always go back, especially with a model like GPT-4, to an earlier answer, and just pick up from there if you’ve headed off in the wrong direction.

So I want to offer an example of how this back and forth can work. So we asked Claude 3 about prompt engineering, about what we’re talking about here. And the way it described it to us is, quote, “It’s a shift from the traditional paradigm of human-computer interaction, where we input explicit commands and the machine executes them in a straightforward way, to a more open ended, collaborative dialogue, where the human and the A.I. are jointly shaping the creative process,” end quote. And that’s pretty good, I think. That’s interesting. It’s worth talking about. I like that idea that it’s a more collaborative dialogue. But that’s also boring, right? Even as I was reading it, it’s a mouthful. It’s wordy. So I kind of went back and forth with it a few times. And I was saying, listen, you’re a podcast editor. You’re concise, but also then I gave it a couple examples of how I punched up questions in the document, right? This is where the question began. Here’s where it ended. And then I said, try again, and try again, and try again, and make it shorter. And make it more concise.

And I got this: quote, “OK, so I was talking to this A.I., Claude, about prompt engineering, you know, this whole art of crafting prompts to get the best out of these A.I. models. And it said something that really struck me. It called prompt engineering a new meta skill that we’re all picking up as we play with A.I., kind of like learning a new language to collaborate with it instead of just bossing it around. What do you think, is prompt engineering the new must have skill?” End Claude.

And that second one, I have to say, is pretty damn good. That really nailed the way I speak in questions. And it gets it at this way where if you’re willing to go back and forth, it does learn how to echo you.

So I was at a loss about when it was Claude and when it was you, to be honest. I was ready to answer at, like, two points along the way, so that was pretty good from my perspective, sitting here, talking to you. That felt interesting, and felt like the conversation we’ve been having. And I think there are a couple of interesting lessons there.

The first, by the way — interestingly, you asked the A.I. about one of its weakest points, which is A.I. itself. And everybody does this, but because its knowledge window doesn’t include that much stuff about A.I., it actually is pretty weak in terms of knowing how to do good prompting, or what a prompt is, or what A.I.s do well. But you did a good job with that. And I love that you went back and forth and shaped it. One of the techniques you used to shape it, by the way, is called few-shot, which is giving an example. So the two most powerful techniques are chain of thought, which we just talked about, and few-shot, giving it examples. Those are both well supported in the literature. And then, I’d add personas. So we’ve talked about, I think, the basics of prompt crafting here overall. And I think that the question was pretty good.
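Few-shot, in prompt form, just means pasting before-and-after examples ahead of the new input, the way the punched-up questions were shown to Claude. A minimal sketch, with placeholder examples standing in for the real documents:

```python
# Few-shot prompting: show the model examples of the transformation
# you want, then hand it the new input to continue the pattern.

def few_shot(instruction: str,
             examples: list[tuple[str, str]],
             new_input: str) -> str:
    shots = "\n\n".join(
        f"Draft: {before}\nPunched up: {after}"
        for before, after in examples
    )
    return f"{instruction}\n\n{shots}\n\nDraft: {new_input}\nPunched up:"

prompt = few_shot(
    "You are a concise podcast editor. Punch up these questions.",
    [("So, can you tell me about your book?",
      "What's the one idea in your book you most want people to steal?")],
    "Is prompt engineering the new must-have skill?",
)
```

The trailing “Punched up:” matters: the model completes the pattern, so ending on the label tells it exactly what kind of text to produce next.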

But you keep wanting to not talk about the future. And I totally get that. But I think when we’re talking about learning something, where there is a lag, where we talk about policy — should prompt crafting be taught in schools? I think it matters to think six months ahead. And again, I don’t think a single person in the A.I. labs I’ve ever talked to thinks prompt crafting for most people is going to be a vital skill, because the A.I. will pick up on the intent of what you want much better.

One of the things I realized trying to spend more time with the A.I. is that you really have to commit to this process. You have to go back and forth with it a lot. If you do, you can get really good questions, like the one I just did — or, I think, really good outcomes. But it does take time.

And I guess in a weird way it’s like the same problem of any relationship, that it’s actually hard to state your needs clearly and consistently and repeatedly, sometimes because you have not even articulated them in words yourself. At least the A.I., I guess, doesn’t get mad at you for it.

But I’m curious if you have advice, either at a practical level or principles level, about how to communicate to these systems what you want from them.

One set of techniques that works quite well is to speed-run to where you are in the conversation. So you can actually pick up an older conversation where you got the A.I.’s mindset where you want it, and work from there. You can even copy and paste that into a new window. You can ask the A.I. to summarize where you got in that previous conversation, and the tone the A.I. was taking. Then, when you give a new instruction, you can say: the interaction I like to have with you is this. So you have it solve the problem for you by having it summarize the tone that you happen to like at the end.
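That speed-run can be sketched as code: summarize the old chat, then seed a fresh one with the summary. `summarize` here is a deliberately crude placeholder; in practice you would ask the model itself to write the summary, as described above.

```python
# "Speed-running" a conversation: carry a summary of an earlier chat
# into a fresh one so you don't start from scratch.

def summarize(old_chat: list[dict]) -> str:
    # Placeholder: really you'd ask the model, "Summarize where we
    # got, and the tone you were taking."
    return " / ".join(m["content"] for m in old_chat[-4:])

def seed_new_chat(old_chat: list[dict], new_instruction: str) -> list[dict]:
    return [
        {"role": "system",
         "content": "Summary of our previous conversation: "
                    + summarize(old_chat)},
        {"role": "user",
         "content": "The interaction I like to have with you is the one "
                    "summarized above. " + new_instruction},
    ]
```

Pasting the summary into a new window does the same thing by hand; the point is that the new chat starts with the mindset already loaded.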

So there are a bunch of ways of building on your work as you start to go forward, so you’re not starting from scratch every time. And I think you’ll start to get shorthands that get you to that right kind of space. For me, there are chats that I pick up on. And actually, I assign these to my students too. I have some ongoing conversations that they’re supposed to have with the A.I., but then there’s a lot of interactions they’re supposed to have that are one off.

So you start to divide the work into, this is a work task. And we’re going to handle this in a single chat conversation. And then I’m going to go back to this long standing discussion when I want to pick it up, and it’ll have a completely different tone. So I think in some ways, you don’t necessarily want convergence among all your A.I. threads. You kind of want them to be different from each other.

You did mention something important there, because they’re already getting much bigger in terms of how much information they can hold. Like, the earlier generations could barely hold a significant chat. Now, Claude 3 can functionally hold a book in its memory. And it’s only going to go way, way, way up from here. And I know I’ve been trying to keep us in the present, but this feels to me really quickly like where this is both going and how it’s going to get a lot better.

I mean, you imagine Apple building Siri 2030, and Siri 2030 scanning your photos and your Journal app — Apple now has a Journal app. You have to assume they’re thinking about the information they can get from that, if you allow it — your messages, anything you’re willing to give it access to. It then knows all of this information about you, keeps all of that in its mind as it talks to you and acts on your behalf. I mean, that really seems to me to be where we’re going, an A.I. that you don’t have to keep telling it who to be because it knows you intimately and is able to hold all that knowledge all at the same time constantly.

It’s not even going there. Like, it’s already there. Gemini 1.5 can hold an entire movie, books. But like, it starts to now open up entirely new ways of working. I can show it a video of me working on my computer, just screen capture. And it knows all the tasks I’m doing and suggests ways to help me out. It starts watching over my shoulder and helping me. I put in all of my work that I did prior to getting tenure and said, write my tenure statement. Use exact quotes.

And it was much better than any of the previous models because it wove together stuff, and because everything was in its memory. It doesn’t hallucinate as much. All the quotes were real quotes, and not made up. And already, by the way, OpenAI has been rolling out a version of ChatGPT that has a private note file the A.I. keeps — you can access it — where it takes notes on you as it goes along, about things you liked or didn’t like, and reads those again at the beginning of any chat. So this is present, right? It’s not even in the future.

And Google also connects to your Gmail, so it’ll read through your Gmail. I mean, I think this idea of a system that knows you intimately, where you’re picking up a conversation as you go along, is not a 2030 thing. It is a 2024 thing if you let the systems do it.

One thing that feels important to keep front of mind here is that we do have some control over that. And not only do we have some control over it, but business models and policy are important here. And one thing we know from inside these A.I. shops is that these A.I.s already are, and certainly will be, really super persuasive.

And so if the later iterations of the A.I. companions are tuned on the margin to try to encourage you to be also out in the real world, that’s going to matter, versus whether they have a business model where all they want is for you to spend a maximum amount of time talking to your A.I. companion, whether you ever have a friend who is flesh and blood be damned. And so that’s an actual choice, right? That’s going to be a programming decision. And I worry about what happens if we leave that all up to the companies. There’s a lot of venture capital money in here right now. At some point, the venture capital runs out. At some point, people need to make big profits. At some point, they’re in competition with other players who need to make profits. And that’s when things — you get into what Cory Doctorow calls the “enshittification” cycle, where things that were once adding a lot of value to the user begin extracting a lot of value from the user.

These systems, because of how they can be tuned, can lead to a lot of different outcomes. But I think we’re going to have to be much more comfortable than we’ve been in the past deciding what we think is a socially valuable use and what we think is a socially destructive use.

I absolutely agree. I think that we have agency here. We have agency in how we operate this in businesses, and whether we use this in ways that encourage human flourishing in employees, or are brutal to them. And we have agency over how this works socially. And I think we abdicated that responsibility with social media, and that is an example. Not to be bad news, because I generally have a lot of mixed optimism and pessimism about parts of A.I., but the bad news piece is there are open source models out there that are quite good.

The internet is pretty open. We would have to make some pretty strong choices to kill A.I. chat bots as an option. We certainly can restrict the large American companies from doing that, but a Llama 2 or Llama 3 is going to be publicly available and very good. There’s a lot of open source models. So the question also is how effective any regulation will be, which doesn’t mean we shouldn’t regulate it.

But there’s also going to need to be some social decisions being made about how to use these things well as a society that are going to have to go beyond just the legal piece, or companies voluntarily complying.

I see a lot of reasons to be worried about the open source models. And people talk about things like bioweapons and all that. But for some of the harms I’m talking about here, if you want to make money off of American kids, we can regulate you. So sometimes I feel like we almost, like, give up the fight before it begins. But in terms of what a lot of people are going to use, if you want to be having credit card payments processed by a major processor, then you have to follow the rules.

I mean, individual people or small groups can do a lot of weird things with an open source model, so that doesn’t negate every harm. But if you’re making a lot of money, then you have relationships we can regulate.

I couldn’t agree more. And I don’t think there’s any reason to give up hope on regulation. I think that we can mitigate. And I think part of our job, though, is also not just to mitigate the harms, but to guide towards the positive viewpoints, right? So what I worry about is that the incentive for profit making will push for A.I. that acts informally as your therapist or your friend, while our worries about experimentation, which are completely valid, are slowing down our ability to do experiments to find out ways to do this right. And I think it’s really important to have positive examples, too. I want to point to the A.I. systems acting ethically as your friend or companion, and figure out what that is, so there’s a positive model to look for. So I’m not just — this is not to denigrate the role of regulation, which I think is actually going to be important here, and self regulation, and rapid response from government, but also the companion problem of, “we need to make some sort of decisions about what are the paragons of this, what is acceptable as a society?”

So I want to talk a bit about another downside here, and this one more in the mainstream of our conversation, which is on the human mind, on creativity. So a lot of the work A.I. is good at automating is work that is genuinely annoying, time consuming, laborious, but often plays an important role in the creative process. So I can tell you that writing a first draft is hard, and that work on the draft is where the hard thinking happens.

And it’s hard because of that thinking. And the more we outsource drafting to A.I., which I think it is fair to say is a way a lot of people intuitively use it — definitely, a lot of students want to use it that way — the fewer of those insights we’re going to have on those drafts. Look, I love editors. I am an editor in one respect. But I can tell you, you make more creative breakthroughs as a writer than as an editor. The space for creative breakthrough is much narrower once you get to editing.

And I do worry that A.I. is going to make us all much more like editors than like writers.

I think the idea of struggle is actually a core one in many things. I’m an educator. And one thing that keeps coming out in the research is that there is a strong disconnect between what students think they’re learning and when they learn. So there was a great controlled experiment at Harvard in intro science classes, where students either went to a pretty entertaining set of lectures, or else they were forced to do active learning, where they actually did the work in class.

The active learning group reported being unhappier and not learning as much, but did much better on tests, because when you’re confronted with what you don’t know, and you have to struggle, when you feel, like, bad, you actually make much more progress than if someone spoon feeds you an entertaining answer. And I think this is a legitimate worry that I have. And I think that there’s going to have to be some disciplined approach to writing as well, like, I don’t use the A.I.

Not just because, by the way, it makes the work easier, but also because you mentally anchor on the A.I.’s answer. And in some ways, the most dangerous A.I. application, in my mind, is the fact that you have these easy co-pilots in Word and Google Docs, because any writer knows about the tyranny of the blank page, about staring at a blank page and not knowing what to do next, and the struggle of filling that up. And when you have a button that produces really good words for you, on demand, you’re just going to do that. And it’s going to anchor your writing. We can teach people about the value of productive struggle, but I think that during the school years, we have to teach people the value of writing — not just assign an essay and assume that the essay does something magical, but be very intentional about the writing process and how we teach people about how to do that, because I do think the temptation of what I call “the button” is going to be there otherwise, for everybody.

But I worry this stretches, I mean, way beyond writing. So the other place I worry about this, or one of the other places I worry about this a lot, is summarizing. And I mean, this goes way back. When I was in school, you could buy SparkNotes. And they were these little, like, pamphlet-sized descriptions of what’s going on in “War and Peace” or what’s going on in “East of Eden.”

And reading the SparkNotes often would be enough to fake your way through the test, but it would not have any chance, like, not a chance, of changing you, of shifting you, of giving you the ideas and insights that reading “Crime and Punishment” or “East of Eden” would.

And one thing I see a lot of people doing is using A.I. for summary. And one of the ways it’s clearly going to get used in organizations is for summary — summarize my email, and so on.

And here too, one of the things that I think may be a real vulnerability we have, as we move into this era — my view is that the way we think about learning and insights is usually wrong. I mean, you were saying a second ago we can teach a better way. But I think we’re doing a crap job of it now, because I think people believe in what I call the matrix theory of the human mind: if you could just jack the information into the back of your head and download it, you’re there.

But what matters about reading a book, and I see this all the time preparing for this show, is the time you spend in the book, where over time, like, new insights and associations for you begin to shake loose. And so I worry it’s coming into an efficiency-obsessed educational and intellectual culture, where people have been imagining forever, what if we could do all this without having to spend any of the time on it? But actually, there’s something important in the time.

There’s something important in the time with a blank page, with the hard book. And I don’t think we lionize intellectual struggle. In some ways, I think we lionize the people for whom it does not seem like a struggle, the people who seem to just glide through and be able to absorb the thing instantly, the prodigies. And I don’t know. When I think about my kids, when I think about the kind of attention and creativity I want them to have, this is one of the things that scares me most, because kids don’t like doing hard things a lot of the time.

And it’s going to be very hard to keep people from using these systems in this way.

So I don’t mean to push back too much on this.

No, please, push back a lot.

But I think you’re right.

Imagine we’re debating and you are a snarky A.I. [LAUGHS]

Fair enough. With that prompt —

With that prompt engineering.

— yeah, I mean, I think that this is the eternal thing about looking back on the next generation, we worry about technology ruining them. I think this makes ruining easier. But as somebody who teaches at universities, like, lots of people are summarizing. Like, I think those of us who enjoy intellectual struggle are always thinking everybody else is going through the same intellectual struggle when they do work. And they’re doing it about their own thing. They may or may not care the same way.

So this makes it easier, but before A.I., there were — best estimates from the U.K. that I could find, 20,000 people in Kenya whose full time job was writing essays for students in the U.S. and U.K. People have been cheating and Sparknoting and everything for a long time. And I think that what people will have to learn is that this tool is a valuable co-intelligence, but is not a replacement for your own struggle.

And the people who found shortcuts will keep finding shortcuts. Temptation may loom larger, but I can’t imagine that — my son is in high school, doesn’t like to use A.I. for anything. And he just doesn’t find it valuable for the way he’s thinking about stuff. I think we will come to that kind of accommodation. I’m actually more worried about what happens inside organizations than I am worried about human thought, because I don’t think we’re going to atrophy as much as we think. I think there’s a view that every technology will destroy our ability to think.

And I think we just choose how to use it or not. Like, even if it’s great at insights, people who like thinking, like thinking.

Well, let me take this from another angle. One of the things that I’m a little obsessed with is the way the internet did not increase either domestic or global productivity for any real length of time. I mean, it’s a very famous line: you can see the IT revolution everywhere but in the productivity statistics. And then you do get, in the ‘90s, a bump in productivity that then peters out in the 2000s.

And if I had told you what the internet would be — like, I mean, everybody, everywhere would be connected to each other. You could collaborate with anybody, anywhere, instantly. You could teleconference. You would have access to, functionally, the sum total of human knowledge in your pocket at all times. All of these things that would have been genuine sci-fi, you would have thought would have led to a kind of intellectual utopia. And it kind of doesn’t do that much, if you look at the statistics.

You don’t see a huge step change. And my view — and I’d be curious for your thoughts on this, because I know this is the area you study — my view is that everything we said was good happened. I mean, as a journalist, Google and things like that make me so much more productive. It’s not that it didn’t give us the gift. It’s that it also had a cost — distraction, checking your email endlessly, being overwhelmed with the amount of stuff coming at you, the sort of endless communication task list, the amount of internal communications in organizations, now with Slack and everything else.

And so some of the time that was given to us back was also taken back. And I see a lot of dynamics like this that could play out with A.I. — I wouldn’t even just say if we’re not careful; I just think they will play out and already are. I mean, the internet is already filling with mediocre crap generated by A.I. There is going to be a lot of destructive potential, right? You are going to have your sex bot in your pocket, right? There’s a million things — and not just that, but inside organizations, there are going to be people padding out what would have been something small, trying to make it look more impressive by using the A.I. to make something bigger. And then, you’re going to use the A.I. to summarize it back down. The A.I. researcher Jonathan Frankle described this to me as, like, the boring apocalypse version of A.I., where you’re just endlessly inflating and then summarizing, and then inflating and then summarizing, the volume of content passing between different A.I.s.

My ChatGPT is making my presentation bigger and more impressive, and your ChatGPT is trying to summarize it down to bullet points for you. And I’m not saying this has to happen. But I am saying that it would require a level of organizational and cultural vigilance to stop, that nothing in the internet era suggests to me that we have.

So I think there’s a lot there to chew on. And I also have spent a lot of time trying to think about why the internet didn’t work as well. I was an early Wikipedia administrator.

Thank you for your service.

[LAUGHS] Yeah, it was very scarring. But I think a lot about this. And I think A.I. is different. I don’t know if it’s different in a positive way. And I think we talked about some of the negative ways it might be different. And I think it’s going to be many things at once, happening quite quickly. So I think the information environment’s going to be filled up with crap. We will not be able to tell the difference between true and false anymore. It will be an accelerant on all the kinds of problems that we have there.

On the other hand, it is an interactive technology that adapts to you. From an education perspective, I have lived through the entire “the internet will change education” piece. I have MOOCs, massive open online courses, that a quarter million people have taken. And in the end, you’re just watching a bunch of videos. Like, that doesn’t change education.

But I can have an A.I. tutor that actually can teach you — and we’re seeing it happen — and adapt to you at your level of education, and your knowledge base, and explain things to you. But not just explain, elicit answers from you, interactively, in a way that actually learns things.

The thing that makes A.I. possibly great is that it’s so very human, so it interacts with our human systems in a way that the internet did not. We built human systems on top of it, but A.I. is very human. It deals with human forms and human issues and our human bureaucracy very well. And that gives me some hope that even though there’s going to be lots of downsides, that the upsides of productivity and things like that are real. Part of the problem with the internet is we had to digitize everything. We had to build systems that would make our offline world work with our online world. And we’re still doing that. If you go to business schools, digitizing is still a big deal 30 years on from early internet access. A.I. makes this happen much quicker because it works with us. So I’m a little more hopeful than you are about that, but I also think that the downside risks are truly real and hard to anticipate.

Somebody was just pointing out that Facebook is now 100 percent filled with algorithmically generated images that look like actual grandparents making things and saying, like, what do you think of my work? Because that’s a great way to get engagement. And the other grandparents in there have no idea it’s A.I. generated.

Things are about to get very, very weird in all the ways that we talked about, but that doesn’t mean the positives can’t be there as well.

I think that is a good place to end. So always our final question, what are three books you’d recommend to the audience?

OK, so the books I’ve been thinking about are not all fun, but I think they’re all interesting. One of them is “The Rise and Fall of American Growth,” by Robert Gordon, which is — it’s two things. It’s an argument about why we will never have the kind of growth that we did in the first part of the Industrial Revolution again, but I think that’s less interesting than the first half of the book, which is literally how the world changed between 1870 or 1890 and 1940, versus 1940 and 1990, or 2000.

And the transformation of the world that happened there — in 1890, no one had plumbing in the U.S., the average woman was carrying tons of water every day, you had no news, everything was local, and everyone was bored all the time — to 1940, where the world looks a lot like today’s world, was fascinating. And I think it gives you a sense of what it’s like to be inside a technological singularity, and I think worth reading for that reason — or at least the first half.

The second book I’d recommend is “The Knowledge,” by Lewis Dartnell, which is a really interesting book. It is ostensibly almost a survival guide, but it is really about how to rebuild industrial civilization from the ground up, if civilization were to collapse. And I don’t recommend it as a survivalist. I recommend it because it is fascinating to see how complex our world is, and how many interrelated pieces we’ve managed to build up as a society. And in some ways, it gives me a lot of hope to think about how all of these interconnections work.

And then the third one is science fiction, and I was debating — I read a lot of science fiction, and there are a lot of interesting A.I.s in science fiction. Everyone in the science fiction world talks about Iain Banks, who wrote about the Culture, which is really interesting — about what it’s like to live beside superintelligent A.I. Vernor Vinge just died yesterday, as we were recording this, and wrote these amazing books about this — he coined the term singularity.

But I want to recommend a much more depressing book that’s available for free, which is Peter Watts’s “Blindsight.” And it is not a fun book, but it is a fascinating thriller set on an interstellar mission to visit an alien race. And it’s essentially a book about sentience, and it’s a book about the difference between consciousness and sentience, and about intelligence and the different ways of perceiving the world in a setting where that is the sort of centerpiece of the thriller. And I think in a world where we have machines that might be intelligent without being sentient, it is a relevant, if kind of chilling, read.

Ethan Mollick, your book is called “Co-Intelligence.” Your Substack is One Useful Thing. Thank you very much.

This episode of “The Ezra Klein Show” was produced by Kristin Lin. Fact checking by Michelle Harris. Our senior engineer is Jeff Geld with additional mixing from Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin and Rollin Hu. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser, and special thanks to Sonia Herrero.

EZRA KLEIN: From New York Times Opinion, this is “The Ezra Klein Show.”

ETHAN MOLLICK: Thanks for having me.

EZRA KLEIN: So let’s assume I’m interested in A.I. And I tried ChatGPT a bunch of times, and I was suitably impressed and weirded out for a minute. And so I know the technology is powerful. I’ve heard all these predictions about how it will take everything over, or become part of everything we do. But I don’t actually see how it fits into my life, really, at all. What am I missing?

ETHAN MOLLICK: So you’re not alone. This is actually very common. And I think part of the reason is that the way ChatGPT works isn’t really set up for you to understand how powerful it is. You really do need to use the paid version; the paid models are significantly smarter. And you can almost think of it like this: GPT-3, which nobody really paid attention to when it came out, before ChatGPT, was about as good as a sixth grader at writing. GPT-3.5, the free version of ChatGPT, is about as good as a high schooler, or maybe even a college freshman or sophomore.

EZRA KLEIN: When you say, bring it to every table you’re at, one, that sounds like a big pain, because now I’ve got to add another step of talking to the computer constantly. But two, it’s just not obvious to me what that would look like. So what does it look like? What does it look like for you, or what does it look like for others — that you feel is applicable widely?

ETHAN MOLLICK: So I just finished this book. It’s my third book. I keep writing books, even though I keep forgetting that writing books is really hard. But this was, I think, my best book, but also the most interesting to write. And it was thanks to A.I. And there’s almost no A.I. writing in the book, but I used it continuously. So things that would get in the way of writing — I think I’m a much better writer than A.I. — hopefully, people agree. But there’s a lot of things that get in your way as a writer.

So I would get stuck on a sentence, or I couldn’t do a transition, and I’d ask: give me 30 versions of this sentence in radically different styles. There were 200 different citations. I had the A.I. read through the papers that I read through, write notes on them, and organize them for me. I had the A.I. suggest analogies that might be useful. I had the A.I. act as readers, in different personas, and read through the paper from the perspective of: is there some example I could give that’s better? Is this understandable or not?

And that’s very typical of the kind of way that I would, say, bring it to the table. Use it for everything, and you’ll find its limits and abilities.
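To make that workflow concrete, here is a minimal sketch of what one of those requests — many restylings of a single stuck sentence — looks like as an API call. This is an illustration, not anything Mollick describes literally: it assumes the OpenAI Python client, an `OPENAI_API_KEY` in the environment, and an illustrative model name.

```python
# Sketch of the "give me 30 versions of this sentence" pattern described
# above. Assumes the OpenAI Python client is installed and OPENAI_API_KEY
# is set; "gpt-4o" is an illustrative placeholder model name.

def variant_prompt(sentence: str, n: int = 30) -> str:
    """Build the request for n restylings of one stuck sentence."""
    return (
        f"Give me {n} versions of this sentence, each in a radically "
        f"different style:\n\n{sentence}"
    )

def get_variants(sentence: str, n: int = 30) -> str:
    """Send the prompt to the model and return its reply as plain text."""
    from openai import OpenAI  # imported here so the sketch reads standalone

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute whatever model you use
        messages=[{"role": "user", "content": variant_prompt(sentence, n)}],
    )
    return resp.choices[0].message.content

# usage (requires an API key):
# print(get_variants("The transition between these chapters feels abrupt."))
```

The point is the pattern, not the specific call: the model is used as a cheap source of volume — 30 options to skim for one keeper — rather than as the author of the final sentence.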

EZRA KLEIN: Let me ask you one specific question on that, because I’ve been writing a book. And on some bad days of writing the book, I decided to play around with GPT-4. And one of the things it got me thinking about was the kind of mistake or problem these systems can help you see and the kind they can’t. So they can do a lot of: give me 15 versions of this paragraph, 30 versions of this sentence. And every once in a while, you get a good version or you’ll shake something a little bit loose.

ETHAN MOLLICK: I think that’s a wise point. I think there’s two or three things bundled there. The first of those is A.I. is good, but it’s not as good as you. It is, say, at the 80th percentile of writers based on some results, maybe a little bit higher. In some ways, if it was able to have that burst of insight and to tell you this chapter is wrong, and I’ve thought of a new way of phrasing it, we would be at that sort of mythical AGI level of A.I. as smart as the best human. And it just isn’t yet.

EZRA KLEIN: For most people — right, if you’re just going to pick one model, what would you pick? What do you recommend to people? And second, how do you recommend they access it? Because something going on in the A.I. world is there are a lot of wrappers on these models. So ChatGPT has an app. Claude does not have an app. Obviously, Google has its suite of products. And there are organizations that have created a different spin on somebody else’s A.I. — so Perplexity, which is, I believe, built on GPT-4 now, you can pay for it.

ETHAN MOLLICK: It’s a really good question. I recommend working with one of the models as directly as possible, through the company that creates them. And there’s a few reasons for that. One is you get as close to the unadulterated personality as possible. And second, that’s where features tend to roll out first. So if you like sort of intellectual challenge, I think Claude 3 is the most intellectual of the models, as you said.

EZRA KLEIN: So you say it takes about 10 hours to learn a model. Ten hours is a long time, actually. What are you doing in that 10 hours? What are you figuring out? How did you come to that number? Give me some texture on your 10 hour rule.

ETHAN MOLLICK: So first off, I want to be clear that the 10 hours is as arbitrary as 10,000 steps. Like, there’s no scientific basis for it. This is an observation. But it does move you past the I-poked-at-this-for-an-evening stage, and it moves you towards using this in a serious way. I don’t know if 10 hours is the real threshold, but it seems to be somewhat transformative. The key is to use it in an area where you have expertise, so you can understand what it’s good or bad at and learn the shape of its capabilities.

EZRA KLEIN: Something that feels to me like a theme of your work is that the way to approach this is not learning a tool. It is building a relationship. Is that fair?

ETHAN MOLLICK: A.I. is built like a tool. It’s software. It’s very clear at this point that it’s an emulation of thought. But because of how it’s built, because of how it’s constructed, it is much more like working with a person than working with a tool. And when we talk about it this way, I almost feel kind of bad, because there’s dangers in building a relationship with a system that is purely artificial, and doesn’t think and have emotions. But honestly, that is the way to go forward. And that is sort of a great sin, anthropomorphization, in the A.I. literature, because it can blind you to the fact that this is software with its own sets of foibles and approaches.

And A.I.s do all of these things. And I find that teachers, managers, even parents, editors, are often better at using these systems, because they’re used to treating this as a person. And they interact with it like a person would, giving feedback. And that helps you. And I think the second piece of that “not tool” piece is that when I talk to OpenAI or Anthropic, they don’t have a hidden instruction manual. There is no list of how you should use this as a writer, or as a marketer, or as an educator.

They don’t even know what the capabilities of these systems are. They’re all sort of being discovered together. And that is also not like a tool. It’s more like a person with capabilities that we don’t fully know yet.

EZRA KLEIN: So you’ve done this with all the big models. You’ve done, I think, much more than this, actually, with all the big models. And one thing you describe feeling is that they don’t just have slightly different strengths and weaknesses, but they have different — for lack of a better term, and to anthropomorphize — personalities, and that the 10 hours in part is about developing an intuition not just for how they work, but kind of how they are and how they talk, the sort of entity you’re dealing with.

ETHAN MOLLICK: It’s important to know the personalities not just as personalities, but because there are tricks. Those are tunable approaches that the system makers decide. So it’s weird to have this — on the one hand, don’t anthropomorphize, because you’re being manipulated, because you are. But on the other hand, the only useful way to work with these systems is to anthropomorphize. So keep in mind that you are dealing with the choices of the makers.

GPT-4 feels like a workhorse at this point. It is the most neutral of the approaches. It wants to get stuff done for you. And it will happily do that. It doesn’t have a lot of time for chitchat. And then we’ve got Google’s Bard — or Gemini now — which feels like it really, really wants to help. We use this for teaching a lot. And we build these scenarios where the A.I. actually acts like a counterparty in a negotiation.

So you get to practice the negotiation by negotiating with the A.I. And it works incredibly well. I’ve been building simulations for 10 years; you can’t imagine what a leap this has been. But when we try and get Google to do that, it keeps leaping in on the part of the students, to try and correct them and say, no, you didn’t really want to say this. You wanted to say that. And I’ll play out the scenario as if it went better. And it really wants to kind of make things good for you.

EZRA KLEIN: You were mentioning a minute ago that what the A.I.s do reflect decisions made by their programmers. They reflect guardrails, what they’re going to let the A.I. say. Very famously, Gemini came out and was very woke. You would ask it to show you a picture of soldiers in Nazi Germany, and it would give you a very multicultural group of soldiers, which is not how that army worked. But that was something that they had built in to try to make more inclusive photography generation.

ETHAN MOLLICK: I think that’s a very important point. And fundamental to A.I. is the idea that we technically know how LLMs work, but we don’t know why they work the way they do, or why they’re as good as they are. We really don’t understand it. The theories range everywhere — from it’s all fooling us, to they’ve emulated the way humans think because the structure of language is the structure of human thought. So even though they don’t think, they can emulate it. We don’t know the answer.

EZRA KLEIN: One thing people know about using these models is that hallucinations, just making stuff up, are a problem. Has that changed at all as we’ve moved from GPT-3.5 to 4, as we move from Claude 2 to 3? Like, has that become significantly better? And if not, how do you evaluate the trustworthiness of what you’re being told?

ETHAN MOLLICK: So those are a couple of overlapping questions. The first of them is, is it getting better over time? So there is a paper in the field of medical citations indicating that around 80 to 90 percent of citations had an error or were made up with GPT-3.5. That’s the free version of ChatGPT. And that drops for GPT-4.

EZRA KLEIN: But doesn’t this make them unreliable in a very tricky way? 80 percent — you’re, like, it’s always hallucinating. 20 percent, 5 percent, it’s enough that you can easily be lulled into overconfidence. And one of the reasons it’s really tough here is you’re combining something that knows how to seem extremely persuasive and confident — you feed into the A.I. a 90-page paper on functions and characteristics of right wing populism in Europe, as I did last night.

ETHAN MOLLICK: Absolutely, and I think hard to grasp, because we’re used to things like type II errors, where we search for something on the internet and don’t find it. We’re not used to type I errors, where we search for something and get an answer back that’s made up. This is a challenge. And there’s a couple things to think about. One of those is — I advocate the BAH standard, best available human. So is the A.I. more or less accurate than the best human you could consult in that area?

EZRA KLEIN: But it also reflects something interesting about the nature of the systems. You have a quote here that I think is very insightful. You wrote, “the core irony of generative A.I.s is that A.I.s were supposed to be all logic and no imagination. Instead, we get A.I.s that make up information, engage in seemingly emotional discussions, and which are intensely creative.” And that last fact is one that makes many people deeply uncomfortable.

ETHAN MOLLICK: I love the phrase “a calculator that uses words.” I think we have been let down by science fiction, both in the utopias and apocalypses that A.I. might bring, but also, even more directly, in our view of how machines should work. People are constantly frustrated, and give the same kinds of tests to A.I.s over and over again, like doing math, which it doesn’t do very well — they’re getting better at this.

EZRA KLEIN: But we were using those measures five years ago, even though they were bad. That’s a point you make that I think is interesting and slightly unsettling.

ETHAN MOLLICK: Yeah, we never had to differentiate humans from machines before. It was always easy. So the idea that we had to have a scale that worked for people and machines, who had that? We had the Turing test, which everyone knew was a terrible idea. But since no machine could pass it, it was completely fine. So the question is, how do we measure this? This is an entirely separate set of issues. Like, we don’t even have a definition of sentience or consciousness.

EZRA KLEIN: So one of the things I will sometimes do, and did quite recently, is give the A.I. a series of personal documents, emails I wrote to people I love that were very descriptive of a particular moment in my life. And then I will ask the A.I. about them, or ask the A.I. to analyze me off of them.

ETHAN MOLLICK: That makes complete sense. I think the weird expectations — we call it the jagged frontier of A.I., that it’s good at some stuff and bad at other stuff. It’s often unexpected. It can lead to these weird moments of disappointment, followed by elation or surprise. And part of the reason why I advocate for people to use it in their jobs is, it isn’t going to outcompete you at whatever you’re best at. I mean, I cannot imagine it’s going to do a better job prepping someone for an interview than you’re doing.

And that’s not me just trying to be nice to you because you’re interviewing me. It’s because you’re a good interviewer. You’re a famous interviewer. It’s not going to be as good as that. Now, there are questions about how good these systems get that we don’t know, but we’re kind of at a weirdly comfortable spot in A.I., which is, maybe it’s the 80th percentile of many performances. But I talk to Hollywood writers. It’s not close to writing like a Hollywood writer. It’s not close to being as good an analyst.

EZRA KLEIN: But this gets to this question of, what are you doing with it? The A.I.s right now seem much stronger as amplifiers and feedback mechanisms and thought partners for you than they do as something you can really outsource your hard work and your thinking to. And that, to me, is one of the differences between trying to spend more time with these systems — like, when you come into them initially, you’re like, OK, here’s a problem, give me an answer.

ETHAN MOLLICK: And that’s why the book’s called “Co-Intelligence.” For right now, we have a prosthesis for thinking. That’s, like, new in the world. We haven’t had that before — I mean, coffee, but aside from that, not much else. And I think that there’s value in that. I think learning to partner with this, and where it can get wisdom out of you or not — I was talking to a physics professor at Harvard. And he said, all my best ideas now come from talking to the A.I. And I’m like, well, it doesn’t do physics that well. He’s like, no, but it asks good questions. And I think that there is some value in that kind of interactive piece.

EZRA KLEIN: We’ve already talked a bit about — Gemini is helpful, and ChatGPT-4 is neutral, and Claude is a bit warmer. But you urge people to go much further than that. You say to give your A.I. a personality. Tell it who to be. So what do you mean by that, and why?

ETHAN MOLLICK: So this is actually almost more of a technical trick, even though it sounds like a social trick. When you think about what A.I.s have done, they’ve trained on the collective corpus of human knowledge. And they know a lot of things. And they’re also probability machines. So when you ask for an answer, you’re going to get the most probable answer, sort of, with some variation in it. And that answer is going to be very neutral. If you’re using GPT-4, it’ll probably talk about a rich tapestry a lot.

It loves to talk about rich tapestries. If you ask it to code something artistic, it’ll do a fractal. It does very normal, central A.I. things. So part of your job is to get the A.I. to go to parts of this possibility space where the information is more specific to you, more unique, more interesting, more likely to spark something in you yourself. And you do that by giving it context, so it doesn’t just give you an average answer. It gives you something that’s specialized for you.

The easiest way to provide context is a persona. You are blank. You are an expert at interviewing, and you answer in a warm, friendly style. Help me come up with interview questions. It won’t be miraculous in the same way that we were talking about before. If you say you’re Bill Gates, it doesn’t become Bill Gates. But that changes the context of how it answers you. It changes the kinds of probabilities it’s pulling from and results in much more customized and better results.
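The persona technique Mollick describes amounts to prepending a system message before the user’s request, steering which part of the model’s probability space the answer is drawn from. A minimal sketch in Python; the helper name `build_persona_messages` is illustrative, not from the conversation, and the chat-style role/content message shape is the common convention used by most A.I. APIs:

```python
# A persona is just extra context placed ahead of the actual task.
# build_persona_messages is a hypothetical helper name.

def build_persona_messages(persona: str, style: str, task: str) -> list[dict]:
    """Assemble a chat-style message list with a persona system prompt."""
    system_prompt = f"You are {persona}. You answer in a {style} style."
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]

messages = build_persona_messages(
    persona="an expert at interviewing",
    style="warm, friendly",
    task="Help me come up with interview questions about A.I. in education.",
)
print(messages[0]["content"])
```

Nothing magical happens in the helper; the point is that the persona line changes the context the model conditions on, which changes the distribution it samples answers from.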

EZRA KLEIN: OK, but this is weirder, I think, than you’re quite letting on here. So something you turned me on to is there’s research showing that the A.I. is going to perform better on various tasks, and differently on them, depending on the personality. So there’s a study that gives a bunch of different personality prompts to one of the systems, and then tries to get it to answer 50 math questions. And the way it got the best performance was to tell the A.I. it was a Starfleet commander who was charting a course through turbulence to the center of an anomaly.

ETHAN MOLLICK: “What the hell” is a good question. And we’re just scratching the surface, right? There’s a nice study actually showing that if you emotionally manipulate the A.I., you get better math results. So telling it your job depends on it gets you better results. Tipping, especially $20 or $100 — saying, I’m about to tip you if you do well, seems to work pretty well. It performs slightly worse in December than May, and we think it’s because it has internalized the idea of winter break.

EZRA KLEIN: I’m sorry, what?

ETHAN MOLLICK: Well, we don’t know for sure, but —

EZRA KLEIN: I’m holding you up here.

ETHAN MOLLICK: Yeah.

EZRA KLEIN: People have found the A.I. seems to be more accurate in May, and the going theory is that it has read enough of the internet to think that it might possibly be on vacation in December?

ETHAN MOLLICK: So it produces more work with the same prompts, more output, in May than it does in December. I did a little experiment where I would show it pictures of outside. And I’m like, look at how nice it is outside. Let’s get to work. But yes, the going theory is that it has internalized the idea of winter break and therefore is lazier in December.

EZRA KLEIN: I want to just note to people that when ChatGPT came out last year, and we did our first set of episodes on this, the thing I told you was this was going to be a very weird world. What’s frustrating about that is that — I guess I can see the logic of why that might be. Also, it sounds probably completely wrong, but also, I’m certain we will never know. There’s no way to go into the thing and figure that out.

ETHAN MOLLICK: And I think that that is, in some ways, both — as you said, the deep weirdness of these systems. But also, there’s actually downside risks to this. So we know, for example, there is an early paper from Anthropic on sandbagging, that if you ask the A.I. dumber questions, it would get you less accurate answers. And we don’t know the ways in which your grammar or the way you approach the A.I. — we know the amount of spaces you put gets different answers.

EZRA KLEIN: Well, I’m interested in the personas, although I just — I really find this part of the conversation interesting and strange. But I’m interested in the personalities you can give the A.I. for a different reason. I prompted you around this research on how a personality changes the accuracy rate of an A.I. But a lot of the reason to give it a personality, to answer you like it is Starfleet Commander, is because you have to listen to the A.I. You are in relationship with it.

ETHAN MOLLICK: The great power of A.I. is as a kind of companion. It wants to make you happy. It wants to have a conversation. And that can be overt or covert.

EZRA KLEIN: Kevin and I have talked a lot about that conversation with Sydney. And one of the things I always found fascinating about it is, to me, it revealed an incredibly subtle level of read by Sydney Bing, which is, what was really happening there? When you say the A.I. wants to make you happy, it has to read on some level what it is you’re really looking for, over time.

ETHAN MOLLICK: It’s a mirror. I mean, it’s trained on our stuff. And one of the revealing things about that, that I think we should be paying a lot more attention to, is the fact that because it’s so good at this, right now, none of the frontier A.I. models (with the possible exception of Inflection’s Pi, which has basically been acquired in large part by Microsoft now) were built to optimize around keeping us in a relationship with the A.I. They just accidentally do that.

There are other A.I. models that aren’t as good that have been focused on this, but that has been something explicit the frontier models have been avoiding till now. Claude sort of breaches that line a little bit, which is part of why I think it’s engaging. But I worry about the same kind of mechanism that inevitably ruined social media, which is, you can make a system more addictive and interesting. And because it’s such a good cold reader, you could tune A.I. to make you want to talk to it more.

EZRA KLEIN: I want to hold here for a minute, because we’ve been talking about how to use frontier models, I think implicitly talking about how to use A.I. for work. But the way that a lot of people are using it is using these other companies that are explicitly building for relationships. So I’ve had people at one of the big companies tell me that if we wanted to tune our system relationally, if we wanted to tune it to be your friend, your lover, your partner, your therapist, like, we could blow the doors off that. And we’re just not sure it’s ethical.

ETHAN MOLLICK: I think that is an absolute near-term certainty, and sort of an unstoppable one, that we are going to have A.I. relationships in a broader sense.

And I think the question is, just like we’ve just been learning — I mean, we’re doing a lot of social experiments at scale we’ve never done before in the last couple of decades, right? Turns out social media brings out entirely different things in humans that we weren’t expecting. And we’re still writing papers about echo chambers and tribalism and facts, and what we agree or disagree with.

We’re about to have another wave of this. And we have very little research. And you could make a plausible story up, that what’ll happen is it’ll help mental health in a lot of ways for people, and then there’ll be more socializing outside of it, or that there might be a rejection of this kind of thing.

EZRA KLEIN: I was worried we were getting off track in the conversation, but I realized we were actually getting deeper on the track I was trying to take us down.

ETHAN MOLLICK: I think that’s a temporary state of affairs, like extremely temporary. I think a GPT-4 class model — we actually already know this. They can guess your intent quite well. And I think that this is a way of giving you a sense of agency or control in the short term. I don’t think you’re going to need to know yourself at all. And if any of the GPT-4 class models allowed themselves to be used in this way, without guardrails, which they don’t, I think you would already find it’s just going to have a conversation with you and morph into what you want.

EZRA KLEIN: So that’s a little bit chilling, but I’m nevertheless going to stay in this world we’re in, because I think we’re going to be in it for at least a little while longer, where you do have to do all this prompt engineering. What is a prompt, first? And what is prompt engineering?

ETHAN MOLLICK: So a prompt is — technically, it is the sentence, the command you’re putting into the A.I. What it really is is the beginning part of the A.I.s text that it’s processing. And then it’s just going to keep adding more words or tokens to the end of that reply, until it’s done. So a prompt is the command you’re giving the A.I. But in reality, it’s sort of a seed from which the A.I. builds.
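That “seed from which the A.I. builds” idea can be sketched in a few lines: a real model predicts the next token from learned probabilities and keeps appending until it decides to stop. Here a tiny hand-written bigram table stands in for those learned probabilities; everything in it is a toy illustration, not how any actual model is implemented:

```python
# Toy illustration of "the prompt is a seed the model keeps extending."
# A hand-written bigram table stands in for the model's learned probabilities.
import random

BIGRAMS = {
    "the": ["prompt", "model"],
    "prompt": ["is"],
    "is": ["a"],
    "a": ["seed"],
    "seed": ["<end>"],
    "model": ["keeps"],
    "keeps": ["adding"],
    "adding": ["tokens"],
    "tokens": ["<end>"],
}

def continue_text(prompt: str, max_tokens: int = 10, seed: int = 0) -> str:
    """Append one 'token' at a time until an end marker, like a (toy) LLM."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            break
        nxt = rng.choice(candidates)
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(continue_text("the prompt"))  # → "the prompt is a seed"
```

The prompt isn’t a command the loop obeys; it’s just the opening tokens, and everything after is continuation conditioned on it, which is why wording the opening differently yields different completions.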

EZRA KLEIN: And when you prompt engineer, what are some ways to do that? Maybe one to begin with, because it seems to work really well, is chain of thought.

ETHAN MOLLICK: Just to take a step back, A.I. prompting remains super weird. Again, strange to have a system where the companies making the systems are writing papers as they’re discovering how to use the systems, because nobody knows how to make them work better yet. And we found massive differences in our experiments on prompt types. So for example, we were able to get the A.I. to generate much more diverse ideas by using this chain of thought approach, which we’ll talk about.

EZRA KLEIN: Then you get an answer, and then what?

ETHAN MOLLICK: And then — what you do in a conversational approach is you go back and forth. If you want work output, what you’re going to do is treat it like it is an intern who just turned in some work to you. Actually, could you punch up paragraph two a little bit? I don’t like the example in paragraph one. Could you make it a little more creative, give me a couple of variations? That’s a conversational approach trying to get work done.

EZRA KLEIN: So I want to offer an example of how this back and forth can work. So we asked Claude 3 about prompt engineering, about what we’re talking about here. And the way it described it to us is, quote, “It’s a shift from the traditional paradigm of human-computer interaction, where we input explicit commands and the machine executes them in a straightforward way, to a more open ended, collaborative dialogue, where the human and the A.I. are jointly shaping the creative process,” end quote.

And that’s pretty good, I think. That’s interesting. It’s worth talking about. I like that idea that it’s a more collaborative dialogue. But that’s also boring, right? Even as I was reading it, it’s a mouthful. It’s wordy. So I kind of went back and forth with it a few times. And I was saying, listen, you’re a podcast editor. You’re concise, but also then I gave it a couple examples of how I punched up questions in the document, right? This is where the question began. Here’s where it ended. And then I said, try again, and try again, and try again, and make it shorter. And make it more concise.

ETHAN MOLLICK: So I am at a loss about when you went to Claude and when it was you, to be honest. So I was ready to answer at like two points along the way, so that was pretty good from my perspective, sitting here, talking to you. That felt interesting, and felt like the conversation we’ve been having. And I think there’s a couple of interesting lessons there.

The first, by the way, is that, interestingly, you asked the A.I. about one of its weakest points, which is A.I. itself. And everybody does this, but because its knowledge window doesn’t include that much stuff about A.I., it actually is pretty weak in terms of knowing how to do good prompting, or what a prompt is, or what A.I.s do well. But you did a good job with that. And I love that you went back and forth and shaped it.

One of the techniques you used to shape it, by the way, was called few-shot, which is giving an example. So the two most powerful techniques are chain of thought, which we just talked about, and few-shot, giving it examples. Those are both well supported in the literature. And then, I’d add personas. So we’ve talked about, I think, the basics of prompt crafting here overall. And I think that the question was pretty good.
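Both techniques Mollick names here are, in the end, just text arranged in the prompt: few-shot means showing worked examples of the style you want, and chain of thought means asking the model to reason step by step before answering. A minimal sketch; the example pair and the helper name `build_prompt` are invented for illustration:

```python
# Few-shot: show the model worked examples of the transformation you want.
# Chain of thought: ask it to reason step by step before producing output.
# Both are plain text assembled into one prompt string.

FEW_SHOT_EXAMPLES = [
    ("Draft: Tell me about prompt engineering.",
     "Punched up: What actually changes when you stop commanding a computer "
     "and start negotiating with one?"),
]

def build_prompt(task: str) -> str:
    """Assemble a few-shot, chain-of-thought prompt as one string."""
    lines = ["Here are examples of the style I want:"]
    for draft, revised in FEW_SHOT_EXAMPLES:
        lines.append(draft)
        lines.append(revised)
    lines.append("Think step by step about what makes the revisions work,")
    lines.append(f"then apply the same approach to: {task}")
    return "\n".join(lines)

print(build_prompt("Draft: Explain chain-of-thought prompting."))
```

The literature Mollick alludes to supports both moves: examples anchor the output format and register, while the step-by-step instruction makes the model spend tokens on intermediate reasoning before committing to an answer.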

EZRA KLEIN: One of the things I realized trying to spend more time with the A.I. is that you really have to commit to this process. You have to go back and forth with it a lot. If you do, you can get really good questions, like the one I just did — or, I think, really good outcomes. But it does take time.

ETHAN MOLLICK: One set of techniques that works quite well is to speed run to where you are in the conversation. So you can actually pick up an older conversation where you got the A.I.’s mindset where you want it and work from there. You can even copy and paste that into a new window. You can ask the A.I. to summarize where you got in that previous conversation, and the tone the A.I. was taking, and then, when you give a new instruction, say: the interaction I like to have with you is this. You have it solve the problem for you by summarizing, at the end, the tone that you happen to like.
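The speed-run trick is two pieces of text: an instruction asking the old thread to summarize its own state and tone, and a new prompt that pastes that summary in as the opening context. A small sketch; the wording and the helper name `build_resume_prompt` are illustrative, not quoted from the conversation:

```python
# Step 1: ask the old conversation to compress itself into a reusable seed.
SUMMARIZE_INSTRUCTION = (
    "Summarize where we got in this conversation and the tone you were "
    "taking, in a form I can paste into a new chat to pick up where we "
    "left off."
)

# Step 2: open the new conversation with that saved summary plus a fresh task.
def build_resume_prompt(saved_summary: str, new_instruction: str) -> str:
    """Seed a new conversation with the saved summary and a new task."""
    return (
        f"The interaction I like to have with you is this: {saved_summary}\n"
        f"Continuing in that mode: {new_instruction}"
    )

print(build_resume_prompt(
    "concise, punchy edits to interview questions",
    "punch up question three",
))
```

Because the model only conditions on the text in front of it, a good summary of the old thread is functionally a shortcut back to that thread’s “mindset.”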

EZRA KLEIN: You did mention something important there, because they’re already getting much bigger in terms of how much information they can hold. Like, the earlier generations could barely hold a significant chat. Now, Claude 3 can functionally hold a book in its memory. And it’s only going to go way, way, way up from here. And I know I’ve been trying to keep us in the present, but this feels to me really quickly like where this is both going and how it’s going to get a lot better.

ETHAN MOLLICK: It’s not even going there. Like, it’s already there. Gemini 1.5 can hold an entire movie, books. But like, it starts to now open up entirely new ways of working. I can show it a video of me working on my computer, just screen capture. And it knows all the tasks I’m doing and suggests ways to help me out. It starts watching over my shoulder and helping me. I put in all of my work that I did prior to getting tenure and said, write my tenure statement. Use exact quotes.

EZRA KLEIN: One thing that feels important to keep in front of mind here is that we do have some control over that. And not only do we have some control over it, but business models and policy are important here. And one thing we know from inside these A.I. shops is these A.I.s already are, but certainly will be, really super persuasive.

And so if the later iterations of the A.I. companions are tuned on the margin to try to encourage you to be also out in the real world, that’s going to matter, versus whether they have a business model that all they want is for you to spend a maximum amount of time talking to your A.I. companion, whether you ever have a friend who is flesh and blood be damned.

And so that’s an actual choice, right? That’s going to be a programming decision. And I worry about what happens if we leave that all up to the companies, right? At some point, there’s a lot of venture capital money in here right now. At some point, the venture capital runs out. At some point, people need to make big profits. At some point, they’re in competition with other players who need to make profits. And that’s when things — you get into what Cory Doctorow calls the “enshittification” cycle, where things that were once adding a lot of value to the user begin extracting a lot of value from the user.

ETHAN MOLLICK: I absolutely agree. I think that we have agency here. We have agency in how we operate this in businesses, and whether we use this in ways that encourage human flourishing in employees, or are brutal to them. And we have agency over how this works socially. And I think we abdicated that responsibility with social media, and that is an example. Not to be bad news, because I generally have a lot of mixed optimism and pessimism about parts of A.I., but the bad news piece is there are open source models out there that are quite good.

EZRA KLEIN: I see a lot of reasons to be worried about the open source models. And people talk about things like bioweapons and all that. But for some of the harms I’m talking about here, if you want to make money off of American kids, we can regulate you. So sometimes I feel like we almost, like, give up the fight before it begins. But in terms of what a lot of people are going to use, if you want to be having credit card payments processed by a major processor, then you have to follow the rules.

ETHAN MOLLICK: I couldn’t agree more. And I don’t think there’s any reason to give up hope on regulation. I think that we can mitigate. And I think part of our job, though, is also not just to mitigate the harms, but to guide towards the positive viewpoints, right? So what I worry about is that the incentive for profit making will push for A.I. that acts informally as your therapist or your friend, while our worries about experimentation, which are completely valid, are slowing down our ability to do experiments to find out ways to do this right.

And I think it’s really important to have positive examples, too. I want to point to the A.I. systems acting ethically as your friend or companion, and figure out what that is, so there’s a positive model to look for. So I’m not just — this is not to denigrate the role of regulation, which I think is actually going to be important here, and self regulation, and rapid response from government, but also the companion problem of, “we need to make some sort of decisions about what are the paragons of this, what is acceptable as a society?”

EZRA KLEIN: So I want to talk a bit about another downside here, and this one more in the mainstream of our conversation, which is on the human mind, on creativity. So a lot of the work A.I. is good at automating is work that is genuinely annoying, time consuming, laborious, but often plays an important role in the creative process. So I can tell you that writing a first draft is hard, and that work on the draft is where the hard thinking happens.

ETHAN MOLLICK: I think the idea of struggle is actually a core one in many things. I’m an educator. And one thing that keeps coming out in the research is that there is a strong disconnect between what students think they’re learning and when they learn. So there was a great controlled experiment at Harvard in intro science classes, where students either went to a pretty entertaining set of lectures, or else they were forced to do active learning, where they actually did the work in class.

Not just because, by the way, it makes the work easier, but also because you mentally anchor on the A.I.’s answer. And in some ways, the most dangerous A.I. application, in my mind, is the fact that you have these easy co-pilots in Word and Google Docs, because any writer knows about the tyranny of the blank page, about staring at a blank page and not knowing what to do next, and the struggle of filling that up. And when you have a button that produces really good words for you, on demand, you’re just going to do that. And it’s going to anchor your writing.

We can teach people about the value of productive struggle, but I think that during the school years, we have to teach people the value of writing — not just assign an essay and assume that the essay does something magical, but be very intentional about the writing process and how we teach people about how to do that, because I do think the temptation of what I call “the button” is going to be there otherwise, for everybody.

EZRA KLEIN: But I worry this stretches, I mean, way beyond writing. So the other place I worry about this, or one of the other places I worry about this a lot, is summarizing. And I mean, this goes way back. When I was in school, you could buy Sparknotes. And they were these little, like, pamphlet sized descriptions of what’s going on in “War and Peace” or what’s going on in “East of Eden.”

ETHAN MOLLICK: So I don’t mean to push back too much on this.

EZRA KLEIN: No, please, push back a lot.

ETHAN MOLLICK: But I think you’re right.

EZRA KLEIN: Imagine we’re debating and you are a snarky A.I. [LAUGHS]

ETHAN MOLLICK: Fair enough. With that prompt —

EZRA KLEIN: With that prompt engineering.

ETHAN MOLLICK: — yeah, I mean, I think that this is the eternal thing about looking back on the next generation: we worry about technology ruining them. I think this makes ruining easier. But as somebody who teaches at universities, like, lots of people are summarizing. Like, I think those of us who enjoy intellectual struggle are always thinking everybody else is going through the same intellectual struggle when they do work. And they’re doing it about their own thing. They may or may not care the same way.

EZRA KLEIN: Well, let me take this from another angle. One of the things that I’m a little obsessed with is the way the internet did not increase either domestic or global productivity for any real length of time. So I mean, it’s a very famous line: you can see the IT revolution everywhere but in the productivity statistics. And then you do get, in the ’90s, a bump in productivity that then peters out in the 2000s.

And so some of the time that was given to us back was also taken back. And I see a lot of dynamics like this that could play out with A.I. — I wouldn’t even just say if we’re not careful, I just think they will play out and already are. I mean, the internet is already filling with mediocre crap generated by A.I. There is going to be a lot of destructive potential, right? You are going to have your sex bot in your pocket, right?

There’s a million things — and not just that, but inside organizations, there’s going to be people padding out what would have been something small, trying to make it look more impressive by using the A.I. to make something bigger. And then, you’re going to use the A.I. to summarize it back down. The A.I. researcher Jonathan Frankle described this to me as, like, the boring apocalypse version of A.I., where you’re just endlessly inflating and then summarizing, and then inflating and then summarizing the volume of content between different A.I.s.

ETHAN MOLLICK: So I think there’s a lot there to chew on. And I also have spent a lot of time trying to think about why the internet didn’t work as well. I was an early Wikipedia administrator.

EZRA KLEIN: Thank you for your service.

ETHAN MOLLICK: [LAUGHS] Yeah, it was very scarring. But I think a lot about this. And I think A.I. is different. I don’t know if it’s different in a positive way. And I think we talked about some of the negative ways it might be different. And I think it’s going to be many things at once, happening quite quickly. So I think the information environment’s going to be filled up with crap. We will not be able to tell the difference between true and false anymore. It will be an accelerant on all the kinds of problems that we have there.

The thing that makes A.I. possibly great is that it’s so very human, so it interacts with our human systems in a way that the internet did not. We built human systems on top of it, but A.I. is very human. It deals with human forms and human issues and our human bureaucracy very well. And that gives me some hope that even though there’s going to be lots of downsides, that the upsides of productivity and things like that are real.

Part of the problem with the internet is we had to digitize everything. We had to build systems that would make our offline world work with our online world. And we’re still doing that. If you go to business schools, digitizing is still a big deal 30 years on from early internet access. A.I. makes this happen much quicker because it works with us. So I’m a little more hopeful than you are about that, but I also think that the downside risks are truly real and hard to anticipate.

EZRA KLEIN: I think that is a good place to end. So always our final question, what are three books you’d recommend to the audience?

ETHAN MOLLICK: OK, so the books I’ve been thinking about are not all fun, but I think they’re all interesting. One of them is “The Rise and Fall of American Growth,” which is — it’s two things. It’s an argument about why we will never have the kind of growth that we did in the first part of the Industrial Revolution again, but I think that’s less interesting than the first half of the book, which is literally how the world changed between 1870 or 1890 and 1940, versus 1940 and 1990, or 2000.

EZRA KLEIN: Ethan Mollick, your book is called “Co-Intelligence.” Your Substack is One Useful Thing. Thank you very much.

ETHAN MOLLICK: Thank you.

EZRA KLEIN: This episode of “The Ezra Klein Show” was produced by Kristin Lin. Fact checking by Michelle Harris. Our senior engineer is Jeff Geld with additional mixing from Efim Shapiro. Our senior editor is Claire Gordon. The show’s production team also includes Annie Galvin and Rollin Hu. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser, and special thanks to Sonia Herrero.

Advertisement

Home — Essay Samples — Sociology — Sociological Imagination — Example of Sociological Imagination

test_template

Example of Sociological Imagination

  • Categories: Sociological Imagination Unemployment

About this sample

close

Words: 651 |

Published: Mar 20, 2024

Words: 651 | Page: 1 | 4 min read

Table of contents

Personal troubles vs. public issues, linking personal troubles to public issues, the impact of unemployment on individuals and society.

Image of Dr. Oliver Johnson

Cite this Essay

Let us write you an essay from scratch

  • 450+ experts on 30 subjects ready to help
  • Custom essay delivered in as few as 3 hours

Get high-quality help

author

Verified writer

  • Expert in: Sociology Economics

writer

+ 120 experts online

By clicking “Check Writers’ Offers”, you agree to our terms of service and privacy policy . We’ll occasionally send you promo and account related email

No need to pay just yet!

Related Essays

2 pages / 1096 words

3 pages / 1439 words

3 pages / 1177 words

3 pages / 1372 words

Remember! This is just a sample.

You can get your custom paper by one of our expert writers.

121 writers online

Still can’t find what you need?

Browse our vast selection of original essay samples, each expertly formatted and styled

Related Essays on Sociological Imagination

In conclusion, sociological imagination provides a powerful framework for understanding society by connecting personal experiences with larger social forces. By recognizing the distinction between personal troubles and public [...]

The concept of the sociological imagination, as formulated by C. Wright Mills, has been a cornerstone of sociological theory and practice since its in 1959. In his seminal work, Mills argued that individuals must look beyond [...]

The concept of sociological imagination, as introduced by sociologist C. Wright Mills in 1959, is a critical tool that allows individuals to understand the intersection of their personal lives and the broader social and [...]

p>One of the key aspects of sociological imagination is its ability to help individuals understand social issues beyond their immediate personal experiences. By using their sociological imagination, individuals can see how their [...]

The first thing to note whilst reading ‘The Sociological Imagination’ (first published in 1959) is that when C. Write Mills refers to “man”/ “men” he is in fact referring to the entire population rather than specifically the [...]

The sociological imagination is a powerful tool that enables individuals to see the connection between personal troubles and public issues. It allows us to understand how society operates, how it shapes our lives, and how we can [...]

Related Topics

By clicking “Send”, you agree to our Terms of service and Privacy statement . We will occasionally send you account related emails.

Where do you want us to send this sample?

By clicking “Continue”, you agree to our terms of service and privacy policy.

Be careful. This essay is not unique

This essay was donated by a student and is likely to have been used and submitted before

Download this Sample

Free samples may contain mistakes and not unique parts

Sorry, we could not paraphrase this essay. Our professional writers can rewrite it and get you a unique paper.

Please check your inbox.

We can write you a custom essay that will follow your exact instructions and meet the deadlines. Let's fix your grades together!

Get Your Personalized Essay in 3 Hours or Less!

We use cookies to personalyze your web-site experience. By continuing we’ll assume you board with our cookie policy .

  • Instructions Followed To The Letter
  • Deadlines Met At Every Stage
  • Unique And Plagiarism Free

