Student Writing in the Digital Age

Essays filled with “LOL” and emojis? College student writing today actually is longer and contains no more errors than it did in 1917.


“Kids these days” laments are nothing new, but the substance of the lament changes. Lately, it has become fashionable to worry that “kids these days” will be unable to write complex, lengthy essays. After all, the logic goes, social media and text messaging reward short, abbreviated expression. Student writing will be similarly staccato, rushed, or even—horror of horrors—filled with LOL abbreviations and emojis.


In fact, the opposite seems to be the case. Students in first-year composition classes are, on average, writing longer essays (from an average of 162 words in 1917, to 422 words in 1986, to 1,038 words in 2006), using more complex rhetorical techniques, and making no more errors than those committed by freshmen in 1917. That’s according to a longitudinal study of student writing by Andrea A. Lunsford and Karen J. Lunsford, “Mistakes Are a Fact of Life: A National Comparative Study.”

In 2006, rhetoric and composition professors Lunsford and Lunsford, responding to government studies warning that students’ literacy levels were declining, decided to crunch the numbers and determine whether students were making more errors in the digital age.

They began by replicating previous studies of American college students’ errors. There have been four similar studies over the past century. In 1917, a professor analyzed the errors in 198 college student papers; in 1930, researchers completed two similar studies, of 170 and 20,000 papers respectively. In 1986, Robert Connors and Andrea Lunsford (of the 2006 study) decided to see whether contemporary students were making more or fewer errors than those earlier studies showed, and analyzed 3,000 student papers from 1984. The 2006 study (published in 2008) follows the process of these earlier studies and was based on 877 papers. (One of the most interesting sections of “Mistakes Are a Fact of Life” discusses how new IRB regulations forced the researchers to work with far fewer papers than their predecessors had.)

Remarkably, the rate at which students make errors in their papers has stayed consistent over the past 100 years. Students in 2006 committed roughly the same number of errors per word as students did in 1917; the average has held steady at about two errors per 100 words.

What has changed are the kinds of errors students make. The four twentieth-century studies show that, when it came to making mistakes, spelling tripped up students the most. Spelling was by far the most common error in both 1917 and 1986, “the most frequent student mistake by some 300 percent.” Going down the list of “top 10 errors,” the patterns shifted: capitalization was the second most frequent error in 1917; in 1986, that spot went to “no comma after introductory element.”

In 2006, spelling lost its prominence, dropping to number five on the list of errors. Spell-check and similar word-processing tools are the undeniable cause. But spell-check creates new errors, too: the number-one error in student writing is now “wrong word.” Spell-check, as most of us know, sometimes corrects spelling to a different word than the one intended; if the writing is not later proofread, this computer-created error goes unnoticed. The second most common error in 2006 was “incomplete or missing documentation,” a result, the authors theorize, of a shift in college assignments toward research papers and away from personal essays.

Additionally, capitalization errors have increased, perhaps, as Lunsford and Lunsford note, because of neologisms like eBay and iPod. But students have also become much better at punctuation and apostrophes, which were the third and fifth most common errors in 1917. These had dropped off the top 10 list by 2006.

The study found no evidence for claims that kids are increasingly using “text speak” or emojis in their papers. Lunsford and Lunsford did not find a single instance of this digital-era error. Ironically, they did find text speak and emoticons in teachers’ comments to students. (Teachers these days?)

The most startling discovery Lunsford and Lunsford made had nothing to do with errors or emojis. They found that college students are writing much more and submitting much longer papers than ever. The average college essay in 2006 was more than double the length of the average 1986 paper, which was itself much longer than the average length of papers written earlier in the century. In 1917, student papers averaged 162 words; in 1930, the average was 231 words. By 1986, the average grew to 422 words. And just 20 years later, in 2006, it jumped to 1,038 words.

Why are 21st-century college students writing so much more? Computers allow students to write faster. (Other advances in writing technology may explain the upticks between 1917, 1930, and 1986. Ballpoint pens and manual and electric typewriters allowed students to write faster than inkwells or fountain pens.) The internet helps, too: Research shows that computers connected to the internet lead K-12 students to “conduct more background research for their writing; they write, revise, and publish more; they get more feedback on their writing; they write in a wider variety of genres and formats; and they produce higher quality writing.”

The digital revolution has been largely text-based. Over the course of an average day, Americans in 2006 wrote more than they did in 1986 (and in 2015 they wrote more than in 2006). New forms of written communication—texting, social media, and email—are often used instead of spoken ones—phone calls, meetings, and face-to-face discussions. With each text and Facebook update, students become more familiar with and adept at written expression. Today’s students have more experience with writing, and they practice it more than any group of college students in history.


In shifting from texting to writing their English papers, college students must become adept at code-switching, using one form of writing for certain purposes (gossiping with friends) and another for others (summarizing plots). As Kristen Hawley Turner writes in “Flipping the Switch: Code-Switching from Text Speak to Standard English,” students do know how to shift from informal to formal discourse, changing their writing as occasions demand. Just as we might speak differently to a supervisor than to a child, so too do students know that they should probably not use “conversely” in a text to a friend or “LOL” in their Shakespeare paper. “As digital natives who have had access to computer technology all of their lives, they often demonstrate in these arenas proficiencies that the adults in their lives lack,” Turner writes. Instructors should “teach them to negotiate the technology-driven discourse within the confines of school language.”

Responses to Lunsford and Lunsford’s study focused on what the results revealed about mistakes in writing: error is often in the eye of the beholder. Teachers mark some errors and neglect to mention (or find) others. And, as a pioneering scholar of the field wrote in the 1970s, context is key when analyzing error: students who make mistakes are not “indifferent…or incapable” but “beginners and must, like all beginners, learn by making mistakes.”

College students are making mistakes, of course, and they have much to learn about writing. But they are not making more mistakes than their parents, grandparents, and great-grandparents did. Since they now use writing to communicate with friends and family, they are more comfortable expressing themselves in words. Plus, most have access to technology that allows them to write faster than ever. If Lunsford and Lunsford’s findings about the average length of student papers hold true, today’s college students will graduate with more pages of completed prose to their names than any other generation.

If we want to worry about college student writing, then perhaps what we should attend to is not clipped, abbreviated writing, but overly verbose, rambling writing. It might be that editing skills—deciding what not to say, and what to delete—may be what most ails the kids these days.




How the Digital Age Is Reinventing (Almost) Everything


In some ways, the digital age can be seen as beginning with the release of Apple’s iPhone in 2007. It is hard now to recall that someone as wise as Clayton Christensen said at the time of its release that the iPhone “wasn’t truly disruptive.” It was, he said, “a product that the existing players in the industry are heavily motivated to beat,” and “its probability of success is going to be limited.”

Five years later, in 2012, Christensen was still saying that the iPhone would soon succumb to price competition and modular knockoffs. “History,” Christensen said, “speaks pretty loudly on that.”

Nine years after that, in 2021, and trillions of dollars in profits later, the iPhone is still going strong. What Christensen missed was that Apple had created not just an industrial-era product but a digital platform, on which developers kept coming up with fresh innovations, known as apps, that customers would love so much they would hesitate to quit for anything less. Apple’s competitors weren’t just competing against a firm or a product: they were competing against Apple and its army of app developers.

Whether Apple took an unseemly share of the platform’s profits at the expense of its developers is currently being litigated. But the game is long over for competitors: almost all of them, except Samsung, have been obliterated.

What Christensen missed was that the iPhone wasn’t just a phone. It was a multi-function device that devastated a whole array of products and services , including address books, video cameras, pagers, wristwatches, maps, books, travel games, flashlights, dictation recorders, music players, timers, alarm clocks, answering machines, yellow pages, wallets, keys, phrase books, transistor radios, personal digital assistants, dashboard navigation systems, remote controls, airline ticket counters, newspapers and magazines, directory assistance, travel and insurance agents, restaurant guides, pocket calculators, and more.


Christensen’s “loud lessons” from the history of industrial-era competition, in which one firm’s products compete against another’s, don’t apply to firms that are innovating in accordance with principles of the digital age. In the digital age, competition doesn’t work in an orderly fashion with phone companies competing against other phone companies. It’s not just that the rules of the game have changed. An entirely new game is being played. In this new game, innovation can transform almost any product and disrupt the dynamic of industrial-era competition.

A Car Is No Longer Just A Car

In the digital age, a car can shift from being a static mechanical transportation device to a dynamic entertainment and business center on wheels that continues to evolve, long after the original purchase, as software updates are streamed to the user over many years.

Thus, Tesla is now setting up a platform that could be as seductive as Apple’s iPhone. This is why the market capitalization of Tesla is almost as much as that of the other car companies combined, firms which sell many millions more cars than Tesla does. Tesla is not just making a car. The stock market believes that Tesla has a good chance of creating a continuously innovating platform that will be difficult to compete against. Accordingly, it values Tesla astronomically.

Figure 1: Auto industry: millions of cars sold vs market capitalization

Note that it isn’t just new technology that creates this new kind of product. Technology is part of it. But it also requires a different kind of management. It needs an obsession with delivering fresh value to customers, in which software engineers are at least as important as mechanical engineers—a fundamental shift in the corporate pecking order of the traditional car company. Both sets of engineers have to collaborate on the common goal of creating value for customers.

A Restaurant Is Not Just A Restaurant

Similar anomalies are emerging in the restaurant sector. The world’s leading pizza firm, Domino’s, is a winner in industrial-era terms, with almost 2,000% growth in market capitalization over 10 years. Over several decades, Domino’s has defeated its long-time rival, Papa John’s, to become the dominant pizza shop worldwide, with some 270,000 establishments.

But Domino’s real long-term competitor may be a newcomer like DoorDash, which has just three establishments. DoorDash doesn’t prepare food; it delivers food from many restaurants. By the end of 2020, its platform served 450,000 merchants, 20 million consumers, and more than 1 million deliverers. It already has a market capitalization three times that of Domino’s, as shown in Figure 2. Domino’s chance of being one of the long-term winners in the digital era will depend on besting firms like DoorDash, and that in turn will depend as much on data as on making pizzas.

Figure 2: Restaurant sector: Domino’s vs. DoorDash market capitalization

Ironically, Domino’s Pizza is one of the few firms to have implemented a successful digital transformation, with exemplary customer interaction. Domino’s problem is the limited array of products it offers, compared with the multiple restaurants that DoorDash offers. The risk is that, over time, as with Amazon, customers may gravitate to a single source for all of their restaurant delivery needs.

Insurance Is Not Just Insurance

Similarly, in the industrial era, an insurance policy was just an insurance policy. In the digital age, an insurance policy can become something very different. With car insurance, for instance, at firms like the international insurer Progressive Corporation, a policy is not just a financial contract. It can become an interactive relationship in which the car owner collaborates with the firm to reduce risk and lower costs.

Thus, drivers can save money on their car insurance by sharing their driving habits with Progressive. Progressive is then able to lower rates for people who drive less, in safer ways, or during safer times of day. Customers can also check their driving data, make changes to their driving habits and obtain bigger discounts. The result is a win-win: good driving behavior is rewarded, the firm has a more attractive business, and society has safer roads.
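To make the mechanics concrete, here is a minimal, purely illustrative sketch of how a usage-based discount of this kind might be computed. The factors, weights, and function below are assumptions invented for illustration, not Progressive’s actual pricing model, which draws on far more telematics signals.

```python
# Hypothetical sketch of usage-based insurance pricing (NOT a real insurer's model).
# Idea: drivers who drive less, brake hard less often, and avoid late-night trips
# pay less than the "average" driver the base premium assumes.

def usage_based_premium(base_premium, miles_per_year, hard_brakes_per_100mi, night_share):
    """Scale a base annual premium by simple, assumed risk factors."""
    mileage_factor = min(miles_per_year / 12_000, 1.5)   # low mileage earns a discount; surcharge capped
    braking_factor = 1 + 0.05 * hard_brakes_per_100mi    # frequent hard braking raises the price
    night_factor   = 1 + 0.30 * night_share              # share of driving done late at night
    return round(base_premium * mileage_factor * braking_factor * night_factor, 2)

# A careful low-mileage driver vs. the baseline driver the base premium assumes.
print(usage_based_premium(1_000, miles_per_year=6_000, hard_brakes_per_100mi=0.5, night_share=0.05))   # ~520
print(usage_based_premium(1_000, miles_per_year=12_000, hard_brakes_per_100mi=2.0, night_share=0.20))  # ~1166
```

The point of the sketch is simply that once driving data is shared, the premium becomes a function of behavior the customer can actually change, which is what turns the policy into the interactive relationship described above.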

How Digital Reinvents Everything

What we are seeing is that firms operating on the principles of the digital age can take (almost) anything that is slow, expensive, disagreeable, impersonal, or difficult to scale, and turn it into something that is quick, easy, personal, agreeable, cheap or free, and easy to scale.

It is not just the technology that makes it happen, although technology is a large part of it. It is also the very different management practices.

The industrial era was built on the technology of steam, oil, and electricity and the management practices of hierarchical bureaucracy.

Figure 3: Industrial era

The digital era has emerged through a combination of new technology, particularly computers and the Internet, and a different kind of management. Instead of hierarchical bureaucracy focused on internal efficiency and outputs, the digital era came of age once firms figured out the principles of business agility, with an obsessive focus on customer value and with work done in teams as part of a network of competence.

Figure 4: How The Digital Age was born

Since the 2000s, the digital age has continued to evolve, with both new kinds of technology (including the cloud, artificial intelligence, blockchain, and algorithmic decision-making) and new kinds of management (including new business models, platforms, ecosystems, and managing data as an asset).

Figure 5: How the digital age has evolved

The result is an economic era in which every business is being, or will be, reinvented.


Steve Denning



7 Digital literacies and the skills of the digital age

Cathy L. Green, Oklahoma State University


Abstract – This chapter is intended to provide a framework for and understanding of digital literacy: what it is and why it is important. The following pages explore the roots of digital literacy, its relationship to language literacy, and its role in 21st-century life.

Introduction

Unlike learning in previous generations, learning in the digital age is marked by rapidly evolving technology, a deluge of information, and a highly networked global community (Dede, 2010). In such a dynamic environment, learners need skills beyond the basic cognitive ability to consume and process language. To understand what the characteristics of the digital age, and of digital learners, mean for how people learn in this new and changing landscape, one may turn to the evolving discussion of literacy or, as one might now say, digital literacy. The history of literacy contextualizes digital literacy and illustrates how literacy has changed over time. By looking at literacy as a historical phenomenon whose characteristics have evolved over time, we can glean the fundamental characteristics of the digital age. Those characteristics in turn illuminate the skills needed to take advantage of digital environments. The following discussion is an overview of digital literacy, its essential components, and why it is important for learning in a digital age.

Moving from Literacy to Digital Literacy

Literacy refers to the ability of people to read and write (UNESCO, 2017). Reading and writing, then, are about encoding and decoding information between written symbols and sounds (Resnick, 1983; Tyner, 1998). More specifically, literacy is the ability to understand the relationship between sounds and written words such that one may read, say, and understand them (UNESCO, 2004; Vlieghe, 2015). Literacy is often considered, and referred to as, a skill or competency. Children and adults alike can spend years developing the appropriate skills for encoding and decoding information.

Over the course of thousands of years, literacy has become much more common and widespread, with a global literacy rate ranging from 81% to 90% depending on age and gender (UNESCO, 2016). From a time when literacy was the domain of an elite few, it has grown to include huge swaths of the global population. There are a number of reasons for this, not the least of which are the advantages the written word can provide. Kaestle (1985) tells us that “literacy makes it possible to preserve information as a snapshot in time, allows for recording, tracking and remembering information, and sharing information more easily across distances among others” (p. 16). In short, literacy led “to the replacement of myth by history and the replacement of magic by skepticism and science. Writing allowed bureaucracy, accounting, and legal systems with universal rules and has replaced face-to-face governance with depersonalized administration” (Kaestle, 1985, p. 16). This is not to place a value judgement on the characteristics of literacy but rather to explain some of the many reasons why it spread.

There are, however, other reasons for the spread of literacy. In England throughout the Middle Ages, literacy grew in part because people who acquired literacy skills could parlay those skills into work with more pay and social advantages (Clanchy, 1983). The great revolutions of the 19th and 20th centuries also relied on leaders who could write and compatriots who could read as a way to spread new ideas beyond the street corners and public gatherings of Paris, Berlin, and Vienna. Literacy was perceived as necessary for spreading information to large numbers of people. In the 1970s, Paulo Freire insisted that literacy was vital for people to participate in their own governance and civic life (Tyner, 1998). His classic “Pedagogy of the Oppressed” begins from the premise that bringing the traditionally illiterate and uneducated into learning situations as partners with their teachers awakens the critical conscience necessary as a foundation for action to foment change (Freire, 1973). UNESCO (2004) also acknowledges the role that literacy plays in enabling populations to effect change and achieve social justice aims. It speaks even more broadly, moving beyond the conditions necessary for revolution, contending that literacy is a fundamental right of every human being that provides employment opportunities and the fundamental skills necessary to accrue greater wealth and improve one’s quality of life.

Although the benefits of literacy were a driving force in its spread, technological advances also enabled the spread of literacy to greater and greater numbers of people. From stamped tokens, tally sticks and clay tablets, to ancient scrolls, handwritten volumes, the printing press, typewriters, and finally computers, technology is largely responsible for driving the evolution of literacy into the particular forms of encoding and decoding information associated with the digital age. Technology has made it possible for literacy to move from the hands of the few to the hands of the masses and to morph into a digital environment with characteristics extending far beyond anything that has been seen before.

Computers and electronic technology not only delivered literacy into the hands of many but also created an environment that makes it possible to store vast amounts of information. Books and libraries led the way in making information easily available to the public, but in the age of computers and the internet the volume of accessible information is larger than ever, more readily available than ever, and changing more quickly than ever before. In the early 21st century, technology continues to develop more quickly than at any time in the past, creating an environment that is constantly changing. These changes contribute to the need for skills beyond traditional literacy skills, sometimes called new media literacy (Jenkins, 2018). For a short video on the reasons why digital literacy is important, see “The New Media Literacies” on YouTube, created by the research team at Project New Media Literacies.

Literacy in the Digital Age

If literacy involves the skills of reading and writing, digital literacy requires the ability to extend those skills in order to effectively take advantage of the digital world (ALA, 2013). More general definitions express digital literacy as the ability to read and understand information from digital sources as well as to create information in various digital formats (Bawden, 2008; Gilster, 1997; Tyner, 1998; UNESCO, 2004). Developing digital skills allows digital learners to manage a vast array of rapidly changing information and is key to both learning and working in an evolving digital landscape (Dede, 2010; Koltay, 2011; Mohammadyari & Singh, 2015). As such, it is important for people to develop certain competencies specifically for handling digital content.

People who adapt well to the digital world exhibit characteristics that enable them to develop and maintain digital literacy skills. Lifelong learning is a key characteristic necessary for handling rapid changes in technology and information and is thus critical to digital literacy. Successful digital learners have a high level of self-motivation, a desire for active modes of learning, and the ability to learn how to learn. Maintaining and learning new technical skills also benefits learners in the digital age, and an attitude of exploration and play helps learners stay engaged and energized in a world where the speed of change and the volume of information could otherwise become overwhelming (Dede, 2010; Jenkins, 2018; Visser, 2012). A final characteristic of the digital learner is the ability to engage in a global network with a greater awareness of one’s place and audience in that network. Together, these characteristics of the digital age guide us in understanding what traits a learner requires to be successful in the digital environment. The following section examines what lies at the intersection of digital skills and the traits of successful digital learners by reviewing existing digital literacy frameworks.

Reviewing Existing Frameworks for Digital Literacy/ies

Digital literacy is alternately described as complicated, confusing, too broad to be meaningful, and always changing (Heitin, 2016; Pangrazio, 2014; Tyner, 1998; Williams, 2006). Because of this confusion, some feel it best to avoid the term digital literacy altogether and opt instead for terms such as digital competencies (Buckingham, 2006), 21st-century skills (Williamson, 2011), or digital skills (Heitin, 2016). Another way to sort out the confusion is to look at digital literacy as multiple literacies (Buckingham, 2006; Lankshear & Knobel, 2008; UNESCO, 2004).

Here, I take the latter approach and look at digital literacy as a collection of literacies, each of which plays a significant role in learning in a digital world. Ng (2012) operationalizes digital literacy as a framework of multiple, specific competencies which, when combined, form a cohesive collection of skills. By taking this approach, we link the characteristics of the digital environment, and those of the digital learner, not to a single digital skill but to a set of digital literacy practices. In this way, we can consider the various skills needed to navigate the digital world in an organized and consistent manner.

Ng (2012) proposes a three-part schema for discussing the overlapping functional characteristics of a digitally competent person: technical, cognitive, and social (see Figure 1).

[Figure 1. The overlapping technical, cognitive, and social dimensions of digital literacy (Ng, 2012)]

Technical literacy, also referred to as operational literacy, refers to the mastery of the technical skills and tasks required to access and work with digital technology, such as how to operate a computer; use a mouse and keyboard; open software; cut, copy, and paste data and files; acquire an internet connection; and so on (Lankshear & Knobel, 2008). The cognitive area of digital literacy focuses on activities such as critical thinking, problem solving, and decision making (Williamson, 2011) and includes the ability to “evaluate and apply new knowledge gained from digital environments” (Jones-Kavalier & Flannigan, 2006, p. 5). The third of Ng’s three categories, social literacies, covers a wide range of activities which together constitute the ability to communicate in a digital environment both socially and professionally, understand cyber security, follow “netiquette” protocols, and navigate discussions with care so as not to misrepresent or create misunderstandings (Ng, 2012). Of particular note, Ng captures the essence of digital literacy by showing how it exists at the intersection of the technical, cognitive, and social aspects of literacy, which are referred to as dimensions. Ng’s framework is not, however, a digital literacy framework itself. Instead it provides a vehicle for exploring the various components of digital literacy at a conceptual level while remaining clear that the individual skills are at all times connected to and dependent upon each other.

A number of organizations publish their own frameworks for digital literacies, including the International Society for Technology in Education (ISTE), the American Association of Colleges and Universities (AACU), the Organization for Economic Cooperation and Development (OECD), the American Library Association (ALA), and the Partnership for 21st Century Skills, among others (Dede, 2010). The frameworks differ somewhat in terminology and organization, but they all include similar skills. What follows is a brief overview of the different frameworks; see Figure 2 for a composite.


Figure 2. Major Frameworks for 21st Century Skills (American Library Association, 2013; Dede, 2010; SCONUL, 2016; Vockley & Lang, 2008)

Each of the frameworks comes from a slightly different angle and at times reflects the background from which it comes. The American Library Association (ALA) framework evolved out of the information literacy tradition of libraries; the American Association of Colleges and Universities (AACU) and the Society of College, National and University Libraries (SCONUL) frameworks evolved from a higher education perspective; the Partnership for 21st Century Learning addresses K-12 education; and the ISTE framework is steeped in a more technical tradition. Even with these different areas of focus, the components of each framework are strikingly similar, although some are more detailed than others. Three of the six specifically address the skills necessary for accessing, searching, and finding information in a digital environment, while the other three have broader categories in which one might expect to find these skills, including research and information fluency, intellectual skills, and ICT literacy. Cognitive skills required for digital literacy are also covered by all of the frameworks in varying degrees of specificity; among them one will find references to evaluating, understanding, creating, integrating, synthesizing, creativity, and innovation. Finally, four of the six frameworks acknowledge the necessity of solid communication skills, variously referred to as life skills, personal and social responsibility, communication, collaboration, digital citizenship, and collective intelligence.

What seems oddly missing from this list of skills is the technical component, which appears explicitly only in the ISTE list of skills. The Partnership for 21st Century Learning uses ICT literacy as a designation for the ability to use technology, and the ALA, in discussing its framework, makes it clear that technical proficiency is a foundational requirement for digital literacy skills. Even with these references to technical skills, the digital literacy frameworks are overwhelmingly partial to the cognitive and social dimensions, and technical proficiency tends to be glossed over by comparison. Even though technical skills receive relatively little attention, we will assume for this discussion that they are a prerequisite to the other digital skills, and we will look more carefully at each of them in the next section.

To fully understand the many digital literacies, we will use the ALA framework as a point of reference for further discussion, drawing on the other frameworks and other materials to further elucidate each skill area. The ALA framework is laid out in terms of basic functions, with enough specificity to make it easy to understand and remember but broad enough to cover a wide range of skills. It includes the following areas:

  • Finding,
  • Understanding,
  • Evaluating,
  • Creating, and
  • Communicating (American Library Association, 2013).

Finding

Finding information in a digital environment represents a significant departure from the way human beings have searched for information for centuries. The learner must abandon older linear or sequential approaches to finding information, such as reading a book or using a card catalog, index, or table of contents, and instead use lateral approaches such as natural language searches, hypermedia text, keywords, search engines, online databases, and so on (Dede, 2010; Eshet, 2002). The shift from sequential to lateral involves developing the ability to construct meaningful search parameters (SCONUL, 2016), whereas before, finding information would have meant simply looking up page numbers in an index or sorting through a card catalog. Although finding information may depend to some degree on the search tool being used (library, internet search engine, online database, etc.), the search results also depend on how well a person is able to generate appropriate keywords and construct useful Boolean searches. Failure in these two areas could easily return too many results to be helpful, vague or generic results, or potentially no useful results at all (Hangen, 2015).
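To make the Boolean logic concrete, here is a minimal sketch in Python. The tiny document collection, the keywords, and the helper function are hypothetical, invented purely to show how each added AND/OR/NOT condition narrows or refines a result set; this is not how any particular search engine or database is implemented.

```python
# Toy illustration of Boolean search logic over a hypothetical document collection.

documents = {
    1: "digital literacy frameworks for higher education",
    2: "teaching traditional literacy in elementary school",
    3: "evaluating sources and digital citizenship online",
    4: "print literacy and the history of the book",
}

def matches(text, all_of=(), any_of=(), none_of=()):
    """Return True if `text` satisfies the AND (all_of), OR (any_of), and NOT (none_of) terms."""
    text = text.lower()
    return (all(term in text for term in all_of)
            and (not any_of or any(term in text for term in any_of))
            and not any(term in text for term in none_of))

# Query: literacy AND digital NOT print
hits = [doc_id for doc_id, doc in documents.items()
        if matches(doc, all_of=("literacy", "digital"), none_of=("print",))]
print(hits)  # -> [1]
```

Dropping the NOT term or swapping AND for OR changes the hit list immediately, which is the practical sense in which well-chosen keywords and operators determine whether a search returns a focused, useful set of results or an unmanageable one.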

A less obvious part of the challenge of finding information is managing the results. Because there is so much data, changing so quickly, in so many different formats, it can be challenging to organize and store it in a way that makes it useful. SCONUL (2016) describes this as the ability to organize, store, manage, and cite digital resources, while the Educational Testing Service also specifically mentions the skills needed to access and manage information. Some ways to accomplish these tasks are social bookmarking tools such as Diigo, clipping and organizing software such as Evernote and OneNote, and bibliographic software. Many sites, such as YouTube, allow individuals with an account to bookmark videos as well as create channels or collections of videos for specific topics or uses. Other websites have similar features.

Understanding

Understanding in the context of digital literacy perhaps most closely resembles traditional literacy in that it, too, is the ability to read and interpret text (Jones-Kavalier & Flannigan, 2006). In the digital age, however, the ability to read and understand extends much further than text alone. For example, searches may return results with any combination of text, video, and audio, as well as still and moving pictures. As the internet has evolved, a whole host of visual languages has emerged, such as moving images, emoticons, icons, data visualizations, and combinations of all of the above. Lankshear and Knobel (2008) refer to these modes of communication as “post typographic textual practice.” Understanding the variety of modes of digital material may also be referred to as multimedia literacy (Jones-Kavalier & Flannigan, 2006), visual literacy (Tyner, 1998), or digital literacy (Buckingham, 2006).

Evaluating

Evaluating digital media requires competencies ranging from assessing the importance of a piece of information to determining its accuracy and its source. Evaluating information is not new to the digital age, but the nature of digital information can make it more difficult to understand who the source of information is and whether it can be trusted (Jenkins, 2018). When there is abundant and rapidly changing data across heavily populated networks, anyone with access can generate information online, making decisions about its authenticity, trustworthiness, relevance, and significance daunting. Learning evaluative digital skills means learning to ask questions about who is writing the information, why they are writing it, and who the intended audience is (Buckingham, 2006). Developing critical thinking skills is part of the literacy of evaluating and assessing the suitability of a specific piece of information for a given use (SCONUL, 2016).

Looking for secondary sources of information can help confirm the authenticity and accuracy of online data, and researching the credentials and affiliations of the author is another way to find out whether an article is trustworthy or valid. One may find other places the author has been published and verify that they are legitimate. Sometimes one can review affiliated organizations to attest to the expertise of the author, such as finding out where the author works, whether they are a member of a professional organization, or whether they are a leading researcher in a given field. All of these provide essential clues for evaluating information online.

Creating

Creating in the digital world means producing knowledge and ideas in digital formats. While writing is a critical component of traditional literacy, it is not the only creative tool in the digital toolbox. Other tools are available and include creative activities such as podcasting, making audio-visual presentations, building data visualizations, 3D printing, writing blogs, and new tools that haven’t even been thought of yet. In short, a digitally literate individual will want to be able to create in all the formats in which digital information may be consumed. A key component of creating with digital tools is understanding what constitutes fair use and what is considered plagiarism. While this is not new to the digital age, it may be more challenging now to find the line between copying and extending someone else’s work.

In part, the reason it has become more difficult to find the line between plagiarism and new work is the “cut and paste culture” of the internet, referred to as “reproduction literacy” (Eshet, 2002, p. 4) and as appropriation in Jenkins’ new media literacies (Jenkins, 2018). The question is what one can change, and how much, without it being considered copying. This skill requires the ability to think critically, evaluate a work, and make appropriate decisions. There are tools and information to help answer those questions, such as Creative Commons. Learning about these resources, and learning how to use them, is part of this digital literacy.

Communicating

Communicating is the final category of digital skills in the ALA framework. The capacity to connect with individuals all over the world creates unique opportunities for learning and sharing information, for which developing digital communication skills is vital. Some of the skills required for communicating in a digital environment include digital citizenship, collaboration, and cultural awareness. This is not to say that one does not need to develop communication skills outside of the digital environment, but the skills required for digital communication go beyond what is required in a non-digital environment. Most of us are adept at personal, face-to-face communication, but digital communication requires the ability to engage in asynchronous environments such as email, online forums, blogs, and social media platforms, where what we say can’t always be deleted but can easily be misinterpreted. Add to that an environment where people number in the millions, and misunderstandings and cultural miscues become far more likely.

The communication category of digital literacies covers an extensive array of skills above and beyond what one might need for face-to-face interactions. It includes competencies around ethical and moral behavior, responsible communication for engagement in social and civic activities (Adam Becker et al., 2017), an awareness of audience, and an ability to evaluate the potential impact of one’s actions online. It also includes skills for handling privacy and security in online environments. These activities fall into two main categories: digital citizenship and collaboration.

Digital citizenship refers to one’s ability to interact effectively in the digital world. Part of this skill is good manners, often referred to as “netiquette.” There is a level of context that is often missing in digital communication, owing to physical distance, a lack of personal familiarity with the people online, and the sheer volume of people who may come into contact with our words. People who know us well may understand exactly what we mean when we say something sarcastic or ironic, but those vocal and facial cues are missing in most digital communication, making it more likely that we will be misunderstood. We are also more likely to misunderstand or be misunderstood if we remain unaware of cultural differences among people online. So, digital citizenship includes an awareness of who we are, what we intend to say, and how it might be perceived by other people we do not know (Buckingham, 2006). It is also a process of learning to communicate clearly and in ways that help others understand what we mean.

Another key digital skill is collaboration, which is essential for effective participation in digital projects via the internet. The internet allows us to engage with people we may never see in person and to work toward common goals, be they social, civic, or business oriented. Creating a community and working together requires a degree of trust and familiarity that can be difficult to build given the physical distance between participants. Greater attention must be paid to inclusive behavior, and more explicit efforts must be made to compensate for perceived or actual distance and disconnectedness. So, while the promise of digital technology for connecting people is impressive, the transition is not automatic, and it requires new skills.

Parting thoughts.

It is clear from the previous discussion of digital literacy that technology and technical skills underpin every other digital skill. A failure to understand hardware, software, the nature of the internet, and cloud-based technologies, and an inability to learn new concepts and tools going forward, handicaps one’s ability to engage with the cognitive and social literacies. While there are sometimes tacit references to technical skills and ability, extant digital literacy frameworks tend to focus more on the cognitive and social aspects of digital environments. There is an implied sense that once technical skills are learned, the digitally literate person can forget about them and move on to the other skills. Given the rapid pace of technological change over the last 40 years, however, anyone working in a digital environment would be well advised to keep in mind that technical concepts and tools continue to develop. It does not seem likely that we will ever reach a point where people can simply take technological skills for granted, and to do so would undermine our ability to address the other digital skills.

Another way to think of this is to recognize that no skill operates independently of the others. Whether searching, creating, evaluating, understanding, or communicating, it is a combination of skills (or literacies) that allows us to accomplish our goals. Thinking critically and evaluating information and sources lead to sound decision-making. Understanding and synthesizing information are necessary for creating, and again, technical tools are necessary for completing the product. Finding information is of little use if one is unable to analyze its usefulness, and creating a great video or podcast will not mean much if one is unable to navigate social and professional networks to communicate those works to others. Understood only in isolation, digital literacies have little meaning and are of little use in approaching digital environments.

Ng’s (2012) conceptual framework reminds us that digital literacy is the space where technical, cognitive, and social literacies overlap. A digital skill is not the same thing as digital literacy, but the two are fully intertwined. Acquiring digital skills is only the beginning of a study of digital literacies, however, and it would be a mistake to stop there. Furthermore, digital literacies span multiple areas, including both the cognitive and the social. The real value of digital literacy lies in understanding the synergistic effect of individual digital literacy skills integrated into sets of competencies that enable one to work effectively in the digital world.

Learning Activities.

Literacy narratives are stories about reading and composing in any form or context. They often include poignant memories of a personal experience with literacy. Digital literacy narratives can sometimes be categorized as narratives that focus on how the writer came to understand the importance of technology in his or her life or pedagogy. More often, they are simply narratives that use a medium beyond the print-based essay to tell the story.

See Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 20(1), available at http://kairos.technorhetoric.net/20.1/praxis/bourelle-et-al

  • Combining both aspects of the genre, write a piece based on your technological literacy, choosing a medium you feel best conveys the message you want to share with your audience.
  • Find and read 2-4 literacy narratives online that emphasize the use of technology and write a short reflection that discusses the main digital literacies used, summarizes the main points made and describes the elements you felt were most important. Also, describe any digital literacy skills you utilized to complete the assignment.
  • Create your literacy narrative that tells the story of a significant experience of your own with digital literacy. Use a multi-modal tool that includes audio and images or video. Share with your classmates and discuss the most important ideas you noticed in others’ narratives.
  • Compare two of the literacy frameworks in Figure 2. How are they alike? How are they different? Do you like one better than the other? Why or Why not?
  • Digital Literacy and why it matters – https://www.youtube.com/watch?v=p2k3C-iB88w
  • The essential elements of digital literacies https://www.youtube.com/watch?v=A8yQPoTcZ78
  • What is a Literacy Narrative? https://www.youtube.com/watch?v=_Mhl2j-cpZo
  • Benji Bissman’s computer literacy narrative – http://daln.osu.edu/handle/2374.DALN/2327 [site can’t be reached, KE 6.12.24]  
  • Global Digital Literacy Council [page not found, KE 6.12.24]
  • International Society for Technology in Education
  • Information and Communication Technologies [site can’t be reached, KE 6.12.24]
  • Education Development Center, Inc.
  • International Visual Literacy Association
  • http://mediasmarts.ca/digital-media-literacy-fundamentals/digital-literacy-fundamentals
  • https://www.microsoft.com/en-us/digitalliteracy/overview.aspx [page not found, KE 6.12.24]
  • http://info.learning.com/hubfs/Corp_Site/Sales%20Tools/12EssentialSkills_Brochure_Apr16.pdf [page not found, KE 6.12.24]
  • http://www.digitalliteracy.us
  • https://k12.thoughtfullearning.com/FAQ/what-are-literacy-skills

References.

Adam Becker, S., Cummins, M., Davis, A., Freeman, A., Hall Gieseinger, C., & Ananthanarayanan, V. (2017). NMC Horizon Report: 2017 Higher Education Edition. The New Media Consortium.

American Library Association. (2013, January). Digital literacy, libraries, and public policy. Washington, D.C. Retrieved from http://www.districtdispatch.org/wp-content/uploads/2013/01/2012_OITP_digilitreport_1_22_13.pdf

Bawden, D. (2008). Origins and concepts of digital literacy. In C. Lankshear & M. Knobel (Eds.), Digital literacies: Concepts, policies and practices (pp. 17–32).

Buckingham, D. (2006). Defining digital literacy. District Dispatch, 263–276. https://doi.org/10.1007/978-3-531-92133-4_4

Clanchy, M. (1983). Looking back from the invention of printing. In D. Resnick (Ed.), Literacy in historical perspective (pp. 7–22). Library of Congress.

Dede, C. (2010). Comparing frameworks for 21st century skills. 21st Century Skills: Rethinking How Students Learn, 51–76.

Eshet, Y. (2002). Digital literacy: A new terminology framework and its application to the design of meaningful technology-based learning environments. Association for the Advancement of Computing in Education, 1–7.

Gilster, P. (1997). Digital Literacy. New York: Wiley Computer Pub.

Hangen, T. (2015). Historical digital literacy, one classroom at a time. Journal of American History. https://doi.org/10.1093/jahist/jav062

Heitin, L. (2016). Digital Literacy: Forging agreement on a definition. Retrieved from www.edweek.org/go/changing-literacy

Jenkins, H. (2018). Project New Media Literacies. Retrieved from http://www.newmedialiteracies.org/

Jones-Kavalier, B. R., & Flannigan, S. L. (2006). Connecting the digital dots: Literacy of the 21st century. Workforce, 29(2), 8–10.

Kaestle, C. F. (1985). The history of literacy and the history of readers. Review of Research in Education, 12, 11–53. Retrieved from http://www.jstor.org/stable/1167145

Koltay, T. (2011). The media and the literacies: Media literacy, information literacy, digital literacy. Media, Culture & Society, 33(2), 211–221. https://doi.org/10.1177/0163443710393382

Lankshear, C., & Knobel, M. (2008). Introduction. In C. Lankshear & M. Knobel (Eds.), Digital literacies: Concepts, policies and practices.

Mohammadyari, S., & Singh, H. (2015). Understanding the effect of e-learning on individual performance: The role of digital literacy. Computers and Education, 82, 11–25. https://doi.org/10.1016/j.compedu.2014.10.025

Ng, W. (2012). Can we teach digital natives digital literacy? Computers and Education, 59(3), 1065–1078. https://doi.org/10.1016/j.compedu.2012.04.016

Pangrazio, L. (2014). Reconceptualising critical digital literacy. Discourse: Studies in the Cultural Politics of Education, 37(2), 163–174. https://doi.org/10.1080/01596306.2014.942836

Reynolds, R. (2016). Defining, designing for, and measuring social constructivist digital literacy development in learners: a proposed framework. Educational Technology Research and Development. https://doi.org/10.1007/s11423-015-9423-4

SCONUL. (2016). The SCONUL 7 Pillars of Information Literacy through a digital literacy “lens.” Retrieved from https://www.sconul.ac.uk/sites/default/files/documents/Digital_Lens.pdf

Tyner, K. (1998). Literacy in a digital world: Teaching and learning in the age of information (Kindle ed.). Routledge.

UNESCO. (2004). The plurality of literacy and its implications for policies and programmes (UNESCO Education Sector Position Paper). UNESCO.

Visser, M. (2012). Digital literacy definition. Retrieved from http://connect.ala.org/node/181197

Vlieghe, J. (2015). Traditional and digital literacy. The literacy hypothesis, technologies of reading and writing, and the “grammatized” body. Ethics and Education. https://doi.org/10.1080/17449642.2015.1039288

Vockley, M., & Lang, V. (2008). 21st century skills, education & competitiveness. Retrieved from http://www.p21.org/storage/documents/21st_century_skills_education_and_competitiveness_guide.pdf

Williams, B. T. (2006). Girl power in a digital world: Considering the complexity of gender, literacy, and technology. Journal of Adolescent & Adult Literacy, 50(4), 300–307. https://doi.org/10.1598/JAAL.50.4.6

Williamson, R. (2011). Digital literacy. EPI Education Partnerships, inc. Retrieved from http://www.iste.org/standards/aspx

This resource is available at no cost at https://open.library.okstate.edu/learninginthedigitalage/

Links checked 6.12.24 KE

Learning in the Digital Age, copyright © 2020 by Cathy L. Green, Oklahoma State University, is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.


Communication – Communicating in the Digital Age Essay


Communicating in the Digital Age is an article by Roshong (2019) dedicated to the problem of adapting communication to modern technologies. The author points out the dramatic changes in work and life that the digital revolution has brought about. However, people do not yet realize how distracting a world of endless notifications and interactions is. Not only is work productivity hampered, but a person also has to spend time restoring focus and concentration. The article aims to show that the ability to work in a bustling, dynamic environment is an essential skill in the digital age. The intended audience is business managers and employees who are responsible for organizing corporate communication. The author’s thesis is that it is imperative to establish effective means of communication that help companies and employees achieve the intended results.

The article does not follow the standard study format, and it does not detail research methods or a chosen sample. However, the author offers practical implications, the first of which is understanding modern paradigm shifts in communication. Most importantly, user-centric communication has overtaken one-way messages intended for large audiences: in practice, users decide what sort of information they receive and share and what platforms they use. Second, digitization has allowed alternative forms of data visualization to thrive. Whereas text and images once made up most information, multiple multimedia options now exist. Third, information flow has become continuous and transpires on a global scale, whereas previously information was periodic and specific to a particular region. All these tendencies signify the need for businesses to adapt and use digital capabilities to their advantage.

The second step is to prioritize means of communication that are both high in quality and fast to consume. For instance, a concise infographic is more effective than a text paragraph because such data visualization conveys both why the message is important and the content of the message itself, whereas reading a text paragraph takes more time and is less likely to keep the audience engaged. This does not imply that traditional means of communication are now obsolete; phone calls and physical meetings, for example, can still be utilized. However, they should also be adjusted to the modern pace of communication.

The third step is to ensure that data is user-centric. Modern technology allows services to be customized to meet customers’ personal preferences. An especially important part of user-centric communication is feedback, which allows customer service to become more personalized. Meanwhile, the information itself should be dense, engaging, and easy to consume. Numerous modern solutions add agility and convenience to the exchange of data; digital workspaces and data-sharing applications are tools that increase communication’s versatility. However, the more communication with customers takes place, the more transparent the company servicing these interactions has to be. Users have to be sure that their personal data is not compromised and that their privacy is protected.

The ultimate point of the article is that leaders cannot change the nature of modern communications, but they can adjust their leadership styles to the dynamic informational environment. Just as the abundance of digital noise can distract employees from working efficiently, proper use of means of data exchange can improve the quality of interactions with colleagues and customers. In order to properly adapt, it is essential to adopt a new communication style that is user-centric, respectful of personal preferences, and transparent at the same time. Combining these qualities with modern information technologies will produce effective and engaging communication, which will satisfy both customers and colleagues.

Roshong, M. (2019). Communicating in the digital age. Strategic Finance.


The Digital Age Essays

Bridging the advertising gap: the digital age, google’s advantage, and future trajectories, popular essay topics.

  • American Dream
  • Artificial Intelligence
  • Black Lives Matter
  • Bullying Essay
  • Career Goals Essay
  • Causes of the Civil War
  • Child Abusing
  • Civil Rights Movement
  • Community Service
  • Cultural Identity
  • Cyber Bullying
  • Death Penalty
  • Depression Essay
  • Domestic Violence
  • Freedom of Speech
  • Global Warming
  • Gun Control
  • Human Trafficking
  • I Believe Essay
  • Immigration
  • Importance of Education
  • Israel and Palestine Conflict
  • Leadership Essay
  • Legalizing Marijuanas
  • Mental Health
  • National Honor Society
  • Police Brutality
  • Pollution Essay
  • Racism Essay
  • Romeo and Juliet
  • Same Sex Marriages
  • Social Media
  • The Great Gatsby
  • The Yellow Wallpaper
  • Time Management
  • To Kill a Mockingbird
  • Violent Video Games
  • What Makes You Unique
  • Why I Want to Be a Nurse
  • Send us an e-mail

August 22, 2024

TAS.Logo.New.Sum22

published by phi beta kappa

Reading in a Digital Age

Notes on why the novel and the Internet are opposites, and why the latter both undermines the former and makes it more necessary


The nature of transition, how change works its way through a system, how people acclimate to the new—all these questions. So much of the change is driven by technologies that are elusive if not altogether invisible in their operation. Signals, data, networks. New habits and reflexes. Watch older people as they try to retool; watch the ease with which kids who have nothing to unlearn go swimming forward. Study their movements, their aptitudes, their weaknesses. I wonder if any population in history has had a bigger gulf between its youngest and oldest members.

I ask my students about their reading habits, and though I’m not surprised to find that few read newspapers or print magazines, many check in with online news sources, aggregate sites, incessantly. They are seldom away from their screens for long, but that’s true of us, their parents, as well.

But how do we start to measure effects—of this and everything else? The outer look of things stays much the same, which is to say that the outer look of things has not caught up with the often intangible transformations. Newspapers are still sold and delivered; bookstores still pile their sale tables high. It is easy for the critic to be accused of alarmism. And yet …

Information comes to seem like an environment. If anything “important” happens anywhere, we will be informed. The effect of this is to pull the world in close. Nothing penetrates, or punctures. The real, which used to be defined by sensory immediacy, is redefined.

From the vantage point of hindsight, that which came before so often looks quaint, at least with respect to technology. Indeed, we have a hard time imagining that the users weren’t at some level aware of the absurdity of what they were doing. Movies bring this recognition to us fondly; they give us the evidence. The switchboard operators crisscrossing the wires into the right slots; Dad settling into his luxury automobile, all fins and chrome; Junior ringing the bell on his bike as he heads off on his paper route. The marvel is that all of them—all of us—concealed their embarrassment so well. The attitude of the present to the past … well, it depends on who is looking. The older you are, the more likely it is that your regard will be benign—indulgent, even nostalgic. Youth, by contrast, quickly gets derisive, preening itself on knowing better, oblivious to the fact that its toys will be found no less preposterous by the next wave of the young.

These notions came at me the other night while I was watching the opening scenes of Wim Wenders's 1987 film Wings of Desire, which has as its premise the active presence of angels in our midst. The scene that triggered me was set in a vast and spacious modern library. The camera swooped with angelic freedom, up the wide staircases, panning vertically to a kind of balcony outcrop where Bruno Ganz, one of Wenders's angels, stood looking down. Below him people moved like insects, studying shelves, removing books, negotiating this great archive of items.

Maybe it was the idea of angels that did it—the insertion of the timeless perspective into this moment of modern-day Berlin. I don’t know, but in a flash I felt myself looking back in time from a distant and disengaged vantage. I was seeing it all as through the eyes of the future, and what I felt, before I could check myself, was a bemused pity: the gaze of a now on a then that does not yet know it is a then, which is unselfconsciously fulfilling itself.

Suddenly it’s impossible to imagine a world in which many interactions formerly dependent on print on paper happen screen to screen. It’s no stretch, no exercise in futurism. You can pretty much extrapolate from the habits and behaviors of kids in their teens and 20s, who navigate their lives with little or no recourse to paper. In class they sit with their laptops open on the table in front of them. I pretend they are taking course-related notes, but would not be surprised to find out they are writing to friends, working on papers for other courses, or just trolling their favorite sites while they listen. Whenever there is a question about anything—a date, a publication, the meaning of a word—they give me the answer before I’ve finished my sentence. From where they stand, Wenders’s library users already have a sepia coloration. I know that I present book information to them with a slight defensiveness; I wrap my pronouncements in a preemptive irony. I could not bear to be earnest about the things that matter to me and find them received with that tolerant bemusement I spoke of, that leeway we extend to the beliefs and passions of our elders.

AOL slogan: “We search the way you think.”

I just finished reading an article in Harper’s by Gary Greenberg (“A Mind of Its Own”) on the latest books on neuropsychology, the gist of which recognizes an emerging consensus in the field, and maybe, more frighteningly, in the culture at large: that there may not be such a thing as mind apart from brain function. As Eric Kandel, one of the writers discussed, puts it: “Mind is a set of operations carried out by the brain, much as walking is a set of operations carried out by the legs, except dramatically more complex.” It’s easy to let the terms and comparisons slide abstractly past, to miss the full weight of implication. But Greenberg is enough of an old humanist to recognize when the great supporting trunk of his worldview is being crosscut just below where he is standing and to realize that everything he deems sacred is under threat. His recognition may not be so different from the one that underlay the emergence of Nietzsche’s thought. But if Nietzsche found a place of rescue in man himself, his Superman transcending himself to occupy the void left by the loss—the murder—of God, there is no comparable default now.

Brain functioning cannot stand in for mind, once mind has been unmasked as that, unless we somehow grant that the nature of brain partakes of what we had allowed might be the nature of mind. Which seems logically impossible, as the nature of mind allowed possibilities of connection and fulfillment beyond the strictly material, and the nature of brain is strictly material. It means that what we had imagined to be the something more of experience is created in-house by that three-pound bundle of neurons, and that it is not pointing to a larger definition of reality so much as to a capacity for narrative projection engendered by infinitely complex chemical reactions. No chance of a wizard behind the curtain. The wizard is us, our chemicals mingling.

“And if you still think God made us,” writes Greenberg, “there’s a neurochemical reason for that too.” He quotes writer David Linden, author of The Accidental Mind: How Brain Evolution Has Given Us Love, Memory, Dreams, and God (!): “Our brains have become particularly adapted to creating coherent, gap-free stories … . This propensity for narrative creation is part of what predisposes us humans to religious thought.” Of course one can, must, ask whence narration itself. What in us requires story rather than the chaotic pullulation that might more accurately describe what is?

Greenberg also cites philosopher Karl Popper, his belief that the neuroscientific worldview will gradually displace what he calls the “mentalist” perspective:

With the progress of brain research, the language of the physiologists is likely to penetrate more and more into ordinary language, and to change our picture of the universe, including that of common sense. So we shall be talking less and less about experiences, perceptions, thoughts, beliefs, purposes and aims; and more and more about brain processes … . When this stage has been reached, mentalism will be stone dead, and the problem of mind and its relation to the body will have solved itself.

But it is not only developments in brain science that are creating this deep shift in the human outlook. This research advances hand in hand with the wholesale implementation and steady expansion of the externalized neural network: the digitizing of almost every sphere of human activity. Long past being a mere arriving technology, the digital is at this point ensconced as a paradigm, fully saturating our ordinary language. Who can doubt that even when we are not thinking, when we are merely functioning in our new world, we are premising that world very differently than did our parents or the many generations preceding them?

What is the place of the former world now, its still-familiar but also strangely sepia-tinged assumptions about the self acting in a larger and, in frightening and thrilling ways, inexplicable world?

Let me go back to that assertion by Linden: “Our brains have become particularly adapted to creating coherent, gap-free stories … . This propensity for narrative creation is part of what predisposes us humans to religious thought.” What a topic for surmising! I would almost go so far as to say that it is a mystery as great as the original creation—the what, how, and whither—the contemplation of how chemicals in combination create things we call narratives, and how these narratives elicit the extraordinary responses they do from chemicals in combination. The idea of “narrative creation” carries a great deal in its train. For narrative—story—is not the same thing as simple sequentiality. To say “I went here and then here and then did this and then did that” is not narrative, at least not in the sense that I’m sure Linden intends. No, narration is sequence that claims significance. Animals, for example, do not narrate, even though they are well aware of sequence and of the consequences of actions. “My master has picked up my bowl and has gone with it into that room; he will return with my food.” This is a chain of events linked by a causal expectation, but it stops there. Human narratives are events and descriptions selected and arranged for meaning.

The question, as always, is one of origins. Did man invent narrative or, owing to whatever predispositions in his makeup, inherit it? Is coming into human consciousness also a coming into narrative—is it part of the nature of human consciousness to seek and create narrative, which is to say meaning? What would it mean then that chemicals in combination created meaning, or the idea of meaning, or the tools with which meaning is sought—created that by which their own structure and operation was theorized and questioned? If that were true, then “mere matter” would have to be defined as having as one of its possibilities that of regarding itself.

We assume that logical thought, syllogistic analytical reason, is the necessary, right thought—and we do so because this same thought leads us to think this way. No exit, it seems. Except that logical thought will allow that there may be other logics, though it cannot explicate them. Another quote from the Harper’s article, this from Greenberg: “As a neuroscientist will no doubt someday discover, metaphor is something that the brain does when complexity renders it incapable of thinking straight.”

Metaphor, the poet, imagination. The whole deeper part of the subject comes into view. What is, for me, behind this sputtering, is my longstanding conviction that imagination—not just the faculty, but what might be called the whole party of the imagination —is endangered, is shrinking faster than Balzac’s wild ass’s skin, which diminished every time its owner made a wish. Imagination, the one feature that connects us with the deeper sources and possibilities of being, thins out every time another digital prosthesis appears and puts another layer of sheathing between ourselves and the essential givens of our existence, making it just that much harder for us to grasp ourselves as part of an ancient continuum. Each time we get another false inkling of agency, another taste of pseudopower.

Reading the Atlantic cover story by Nicholas Carr on the effect of Google (and online behavior in general), I find myself especially fixated on the idea that contemplative thought is endangered. This starts me wondering about the difference between contemplative and analytic thought. The former is intransitive and experiential in its nature, is for itself; the latter is transitive, is goal directed. According to the logic of transitive thought, information is a means, its increments mainly building blocks toward some synthesis or explanation. In that thought-world it’s clearly desirable to have a powerful machine that can gather and sort material in order to isolate the needed facts. But in the other, the contemplative thought-world—where reflection is itself the end, a means of testing and refining the relation to the world, a way of pursuing connection toward more affectively satisfying kinds of illumination, or insight —information is nothing without its contexts. I come to think that contemplation and analysis are not merely two kinds of thinking: they are opposed kinds of thinking. Then I realize that the Internet and the novel are opposites as well.


This idea of the novel is gaining on me: that it is not, except superficially, only a thing to be studied in English classes—that it is a field for thinking, a condensed time-world that is parallel (or adjacent) to ours. That its purpose is less to communicate themes or major recognitions and more to engage the mind, the sensibility, in a process that in its full realization bears upon our living as an ignition to inwardness, which has no larger end, which is the end itself. Enhancement. Deepening. Priming the engines of conjecture. In this way, and for this reason, the novel is the vital antidote to the mentality that the Internet promotes.

This makes an end run around the divisive opposition between “realist” and other modes of fiction (as per the critic James Wood), the point being not the nature of the representation but the quality and feel of the experience.

It would be most interesting, then, to take on a serious experiential-phenomenological “reading” of different kinds of novels—works from what are seen now as different camps.

My real worry has less to do with the overthrow of human intelligence by Google-powered artificial intelligence and more with the rapid erosion of certain ways of thinking—their demotion, as it were. I mean reflection, a contextual understanding of information, imaginative projection. I mean, in my shorthand, intransitive thinking. Contemplation. Thinking for its own sake, non-instrumental, as opposed to transitive thinking, the kind that would depend on a machine-driven harvesting of facts toward some specified end. Ideally, of course, we have both, left brain and right brain in balance. But the evidence keeps coming in that not only are we hypertrophied on the left-brain side, but we are subscribing wholesale to technologies reinforcing that kind of thinking in every aspect of our lives. The digital paradigm. The Google article in The Atlantic was subtitled “What the Internet Is Doing to Our Brains,” ominous in its suggestion that brain function is being altered; that what we do is changing how we are by reconditioning our neural functioning.

For a long time we have had the idea that the novel is a form that can be studied and explicated, which of course it can be. From this has arisen the dogmatic assumption that the novel is a statement, a meaning-bearing device. Which has, in turn, allowed it to be considered a minor enterprise—for these kinds of meanings, fine for high-school essays on Man’s Inhumanity to Man, cannot compete in the marketplace with the empirical requirements of living in the world.

This message-driven way of looking at the novel allows for the emergence of evaluative grids, the aesthetic distinctions that then create arguments between, say, proponents of realism and proponents of formal experimentation, where one way or the other is seen as better able to bring the reader a weight of content. In this way, at least, the novel has been made to serve the transitive, goal-driven ideology.

But we have been ignoring the deeper nature of fiction. That it is inwardly experiential, intransitive, a mode of contemplation, its purpose being to create for the author and reader a terrain, an arena of liberation, where mind can be different, where mind and imagination can freely combine, where memory and sensation can be deployed, intensified through the specific constraints that any imagined situation allows.

The question comes up for me insistently: Where am I when I am reading a novel? I am “in” the novel, of course, to the degree that it involves me. I may be absorbed, but I am never without some awareness of the world around me—where I am sitting, what else might be going on in the house. Sometimes I think—and this might be true of writing as well—that it is misleading to think of myself as hovering between two places: the conjured and the empirically real. That it is closer to the truth to say that I occupy a third state, one which somehow amalgamates two awarenesses, not unlike that short-lived liminal place I inhabit when I am not yet fully awake, when I am sentient but still riding on the momentum of my sleep. I experience both, at times, as a privileged kind of profundity, an enhancement.

Reading a novel involves a double transposition—a major cognitive switch and then a more specific adaptation. The first is the inward plunge, giving in to the “Let there be another kind of world” premise. No novel can be entered without taking this step. The second involves agreeing to the givens of the work, accepting that this is New York circa 2004 as seen through the eyes of a first-person “I” or a presiding narrator.

Here I have to emphasize the distinction, so often ignored, between the fictional creation “New York” and the existing city. The novel may invoke a place, but it is not simply reporting on the real. The novelist must bring that location, however closely it maps to the real, into the virtual gravitational space of the work. Which is a fabrication.

The vital thing is this shift, which cannot take place, really, without the willingness or intent on the reader’s part to experience a change of mental state. We all know the sensation of duress that comes when we try to read or immerse ourselves in anything when there is no desire. At these times the only thing possible is to proceed mechanically with taking in the words, hoping that they will somehow effect the magic, jump-start the imagination. This is the power of words. They are part of our own sense-making process, and when their designations and connotations are intensified by rhythmic musicality, a receptivity can be created.

The problem we face in a culture saturated with vivid competing stimuli is that the first part of the transaction will be foreclosed by an inability to focus—the first step requires at least that the language be able to reach the reader, that the word sounds and rhythms come alive in the auditory imagination. But where the attention span is keyed to a different level and other kinds of stimulus, it may be that the original connection can’t be made. Or if made, made weakly. Or will prove incapable of being sustained. Imagination must be quickened and then it must be sustained—it must survive interruption and deflection. Formerly, I think, the natural progression of the work, the ongoing development and complication of the situation, if achieved skillfully, would be enough. But more and more comes the complaint, even from practiced readers, that it is hard to maintain attentive focus. The works have presumably not changed. What has changed is either the conditions of reading or something in the cognitive reflexes of the reader. Or both.

All of us now occupy an information space blazing with signals. We have had to evolve coping strategies. Not merely the ability to heed simultaneous cues from different directions, cues of different kinds, but also—this is important—to engage those cues more obliquely. When there is too much information, we graze it lightly, applying focus only where it is most needed. We stare at a computer screen with its layered windows and orient ourselves with a necessarily fractured attention. It is not at all surprising that when we step away and try to apply ourselves to the unfragmented text of a book we have trouble. It is not so easy to suspend the adaptation.

When reading Joseph O’Neill’s Netherland , I am less caught in the action—there is not that much of it—than the tonality. I have the familiar, necessary sense of being privy to the thoughts (and rhythmic inner workings) of Hans, the narrator, and I am interested in him. Though to be accurate I don’t know that it’s as much Hans himself that I am drawn to as the feeling of eavesdropping on another consciousness. All aspects of this compel me, his thoughts and observations, the unexpected detours his memories provide, his efforts to engage in his own feeling-life. I am flickeringly aware as I read that he is being written , and sometimes there is a swerve into literary self-consciousness. But this doesn’t disturb me, doesn’t break the fourth wall: I am perfectly content to see these shifts as the product of the author’s own efforts, which suggests that I tend to view the author as on a continuum with his characters, their extension. It is the proximity to and belief in the other consciousness that matters, more than its source or location. Sometimes everything else seems a contrivance that makes this one connection possible. It is what I have always mainly read for .

This brings me back to the old question, the one I have yet to answer convincingly. What am I doing when I am reading a novel? How do I justify the activity as something more than a way to pass the time? Have all the novels I’ve read in my life really given me any bankable instruction, beyond a deeper feel for words, the possibilities of syntax, and so on? Have I ever seriously been bettered, or even instructed, by my exposure to a theme, some truism about existence over and above the situational proxy-experience? More, that is, than what my own thinking has given me? And how would this work?

I read novels in order to indulge in a concentrated and directed sort of inner activity that is not available in most of my daily transactions. This reading, more than anything else I do, parallels—and thereby tunes up, accentuates—my own inner life, which is ever associative, a shuttling between observation, memory, reflection, emotional recognition, and so forth. A good novel puts all these elements into play in its own unique fashion.

What is the point, the value, of this proxy investment? While I am reading a novel, one that reaches me at a certain level, then the work, the whole of it—pitch, tonality, regard of the world—lives inside me as if inside parentheses, and it acts on me, maybe in a way analogous to how materials in parenthesis act on the sense of the rest of the sentence. My way of looking at others or my regard for the larger directional meaning of my life is subject to pressure or infiltration. I watch people crossing the street at an intersection and something of the character’s or author’s sense of scale—how he inflects the importance of the daily observation—influences my feeling as I wait at the light. And the incidental thoughts that I derive from that watching have a way of resonating with the outlook of the book. Is this a widening or deepening of my experience? Does it in any way make me better fit for living? Hard to say.

What does the novel leave us after it has concluded, resolved its tensions, given us its particular exercise? I always liked Ortega y Gasset’s epigram that “culture is what remains after we’ve forgotten everything we’ve read.” We shouldn’t let the epigrammatical neatness obscure the deeper truth: that there is something over and above the so-called contents of a work that is not only of some value, but that may constitute culture itself.

Having just the other day finished Netherland , I can testify about the residue a novel leaves, not in terms of culture so much as specific personal resonance. Effects and impacts change constantly, and there’s no telling what, if anything, I will find myself preserving a year from now. But even now, with the scenes and characters still available to ready recall, I can see how certain things start to fade and others leave their mark. The process of this tells on me as a reader, no question. With O’Neill’s novel—and for me this is almost always true with fiction—the details of plot fall away first, and so rapidly that in a few months’ time I will only have the most general précis left. I will find myself getting nervous in party conversations if the book is mentioned, my sensible worry being that if I can’t remember what happened in a novel, how it ended, can I say in good conscience that I have read it? Indeed, if I invoke plot memory as my stricture, then I have to confess that I’ve read almost nothing at all, never mind these decades of turning pages.


What—I ask it again—what has been the point of my reading? One way for me to try to answer is to ask what I do retain. Honest answer? A distinct tonal memory, a conviction of having been inside an author’s own language world, and along with that some hard-to-pinpoint understanding of his or her psyche. Certainly I believe I have gained something important, though to hold that conviction I have to argue that memory access cannot be the sole criterion of impact; that there are other ways that we might possess information, impressions, and even understanding. For I will insist that my reading has done a great deal for me even if I cannot account for most of it. Also, there are different kinds of memory access. You can shine the interrogation lamp in my face and ask me to describe Shirley Hazzard’s The Transit of Venus and I will fail miserably, even though I have listed it as one of the novels I most admire. But I know that traces of its intelligence are in me, that I can, depending on the prompt, call up scenes from that novel in bright, unexpected flashes: it has not vanished completely. And possibly something similar explains Ortega’s “culture is what remains” aphorism.

In a lifetime of reading, which maps closely to a lifetime of forgetting, we store impressions willy-nilly, according to private systems of distribution, keeping factual information on one plane; acquired psychological insight (how humans act when jealous, what romantic compulsion feels like) on another; ideas on a third, and so on. I believe that I know a great deal without knowing what I know. And that, further, insights from one source join with those from another. I may be, unbeknownst to myself, quite a student of human nature based on my reading. But I no longer know in every case that my insights are from reading. The source may fade as the sensation remains.

But there is one detail from Netherland that did leave an especially bright mark on me and may prove to be an index to everything else. O’Neill describes how Hans, in his lonely separation from his wife and child (he is in New York, they are in London), makes use of the Google satellite function on his computer. “Starting with a hybrid map of the United States,” he tells us,

I moved the navigation box across the north Atlantic and began my fall from the stratosphere: successively, into a brown and greenish Europe … From the central maze of mustard roads I followed the river southwest into Putney, zoomed in between the Lower and Upper Richmond Roads, and, with the image purely photographic, descended finally on Landford Road. It was always a clear and beautiful day—and wintry, if I correctly recall, with the trees pale brown and the shadows long. From my balloonist’s vantage point, aloft at a few hundred meters, the scene was depthless. My son’s dormer was visible, and the blue inflated pool and the red BMW; but there was no way to see more, or deeper. I was stuck.

At the very end of the novel, Hans reverses vantage. That is, he pursues the satellite view from England—he has returned—looking to see if he can see the cricket field where he worked on Staten Island with his friend Chuck Ramkissoon:

I fall again, as low as I can. There’s Chuck’s field. It is brown—the grass has burned—but it is still there. There’s no trace of a batting square. The equipment shed is gone. I’m just seeing a field. I stare at it for a while. I am contending with a variety of reactions, and consequently, with a single brush on the touch pad I flee upward into the atmosphere and at once have in my sights the physical planet, submarine wrinkles and all—have the option, if so moved, to go anywhere.

I find this obsession of his intensely moving, a deep reflection of his personality; I also find it quite effective as an image device. To begin with, the contemplation of such intensified action-at-a-distance fascinates—the idea that one even can do such a thing. And I confess that I stopped reading after the first passage and went right upstairs to my laptop to see if it was indeed possible to get such access. It is—though I stopped short of downloading what I needed out of fear that bringing the potentiality of a God vantage into my little machine might overwhelm its circuitry.

This idea of vantage is to be considered. Not only for what it gives the average user: sophisticated visual access to the whole planet (I find it hard to even fathom this—I who after years of flying still thrill like a child when the plane descends in zoom-lens increments, turning a toy city by degrees into an increasingly material reality), but also for the uncanny way in which it offers a correlative to the novelist’s swooping freedom. Still, Hans can only get so close—he is constrained by the limits of technology, and, necessarily, by visual exteriority. The novelist can complete the action, moving right in through the dormer window, and then, if he has set it up thus, into the minds of any of the characters he has found/created there.

This image is relevant in another, more conceptual way. The reality O’Neill has so compellingly described, that of swooping access, is part of the futurama that is our present. The satellite capability stands for many other kinds of capabilities, for the whole new reach of information technology, which more than any transformation in recent decades has changed how we live and—in ways we can’t possibly measure—who we are. It questions the place of fiction, literature, art in general, in our time. Against such potency, one might ask, how can beauty—how can the self’s expressions—hold a plea? The very action that the author renders so finely poses an indirect threat to his livelihood. No, no —comes the objection. Isn’t the whole point that he has taken it over with his imagination, on behalf of the imagination? Yes, of course, and it is a striking seizure. But we should not be too complacent about the novelist’s superior reach. For these very things—all of the operations and abilities that we now claim—are encroaching on every flank. Yes, O’Neill can capture in beautiful sentences the sensation of a satellite eye homing in on its target, but the fact that such a power is available to the average user leaches from the overall power of the novel-as-genre. In giving us yet another instrument of access, the satellite eye reduces by some factor the operating power of imagination itself. The person who can make a transatlantic swoop will, in part for having that power, be less able, or less willing, or both, to read the labored sequences that comprise any written work of art. Not just his satellite ventures, but the sum of his Internet interactions, which are other aspects of our completely transformed information culture.

After all my jibes against the decontextualizing power of the search engine, it is to Google I go this morning, hoping to track down the source of Nabokov’s phrase “aesthetic bliss.” And indeed, five or six entries locate the quote from his afterword to Lolita : “For me a work of fiction exists only insofar as it affords me what I shall bluntly call aesthetic bliss.” The phrase has been in my mind in the last few days, following my reading of Netherland and my attempts to account for the value of that particular kind of reading experience. “Aesthetic bliss” is one kind of answer—the effects on me of certain prose styles, like Nabokov’s own, or John Banville’s, or Virginia Woolf’s. But the phrase sounds trivial; it sounds like mere connoisseurship, a self-congratulatory mandarin business. It’s far more complicated than any mere swooning over pretty words and phrases. Aesthetic bliss. To me it expresses the delight that comes when the materials, the words, are working at their highest pitch, bringing sensation to life in the mind.

Sensation … I can imagine an objection, a voice telling me that sensation itself is trivial, not as important as idea , as theme. As if there is a hierarchy with ideas on one level, and psychological insights, and far below the re-creation of the textures of experience and inward process. I obviously don’t agree, nor does my reading sensibility, which, as I’ve confessed already, does not go seeking after themes and usually forgets them soon after taking them in. What thou lovest well remains—and for me it is language in this condition of alert, sensuous precision, language that does not forget the world of nouns. I’m thinking that one part of this project will need to be a close reading of and reflection upon certain passages that are for me certifiably great. I have to find occasion to ask—and examine closely—what happens when a string of words gets something exactly right.

We always hear arguments about how the original time-passing function of the triple-decker novel has been rendered obsolete by competing media. What we hear less is the idea that the novel serves and embodies a certain interior pace, and that this has been shouted down (but not eliminated) by the transformations of modern life. Reading requires a synchronization of one’s reflective rhythms to those of the work. It is one thing to speed-read a dialogue-rich contemporary satire, another to engage with the nuanced thought-world of Norman Rush’s characters in Mating . The reader adjusts to the author, not vice versa, and sometimes that adjustment feels too difficult. The triple-decker was, I’m theorizing, synchronous with the basic heart rate of its readers, and is now no longer so.

But the issue is more complicated still. For it’s one thing to say that sensibility is timed to certain rhythms—faster, slower—another to reflect that what had once been a singular entity is now subject to near-constant fragmentation by the turbulent dynamic of life as we live it. Concentration can be had, but for most of us only by setting ourselves against the things that routinely destroy it.

Serious literary work has levels. The engaged reader takes in not only the narrative premise and the craft of its realization, but also the resonance—that which the author creates, deliberately, through her use of language. It is the secondary power of good writing, often the ulterior motive of the writing. The two levels operate on a lag, with the resonance accumulating behind the sense, building a linguistic density that is the verbal equivalent of an aftertaste, or the “finish.” The reader who reads without directed concentration, who skims, or even just steps hurriedly across the surface, is missing much of the real point of the work; he is gobbling his foie gras.

Concentration is no longer a given; it has to be strategized, fought for. But when it is achieved it can yield experiences that are more rewarding for being singular and hard-won. To achieve deep focus nowadays is also to have struck a blow against the dissipation of self; it is to have strengthened one’s essential position.

Sven Birkerts edits the literary journal Agni and directs the Bennington Writing Seminars. He is the author of eight books, most recently The Art of Time in Memoir: Then, Again. He is completing The Other Walk , a collection of short prose.


Report: What Does it Mean to be Human in the Digital Age?

Watch a video of the discussion here.

What does it mean to be human in the digital age? The ongoing convergence between science fiction and real-life technological advances makes it easy to imagine ‘the digital age’ as an era of conflict between human and artificial intelligences: of new technology as an alien tool to be managed and contained. Yet if we see the development of writing and the invention of printed books – both revolutionary technologies in their day – as adding to, rather than diminishing, the richness of human experience, discussions of the digital also have to move beyond simple oppositions of human emotions and sensations versus efficient and uncaring machines. On Thursday night, four speakers came together to think about what happens when the two overlap – and what might happen in the future. As a commenter on Twitter pointed out, the picture of two hands touching which appears on posters for this event, run by TORCH, might look at first glance like an oppositional encounter between human and machine. But, looked at another way, both the hands “are human, media(ted)”: technology and human self-expression seamlessly combined and promising to spark new forms of life.

Introduced and chaired by Dame Lynne Brindley, the Master of Pembroke College and former Chief Executive of the British Library, and kicking off the TORCH Headline Series Humanities and the Digital Age, the discussion touched on examples of digitalization in libraries, museums, and theatres; its shaping of vast global communities, and its reshaping of the most intimate processes of the human mind. During and after the event, a lively Twitter debate around the hashtag #HumDigAge (including both viewers inside the lecture theatre and those watching a global livestream) provided its own evidence for the potential of digital technology to mediate thoughtful communication.

The first three speakers explored issues of memory: the internet’s almost infinite but indiscriminate capacity to store information. This has complex effects on our sense of connection to history, art, and even to our own lives and communities. Dr Chris Fletcher, Keeper of Special Collections at the Bodleian Library, described how digitalization is revolutionizing the ways in which libraries and archives preserve documents of the past, and raised some problems that these institutions may face in the future. Even as demand has grown for keyword scanning and remote access, he explained, the desire to experience texts in person continues and even increases: a rise in physical visits to the Bodleian’s reading rooms, a flourishing academic field focusing on material culture, and a widespread – and, for buyers of rare books, commercially vital – fascination with historical items with qualities of “concentrated thingness”, such as handwritten manuscripts. Diane Lees also spoke from the perspective of museums and archives on the theme of public memory, sharing her experiences as Director-General of the Imperial War Museums, running public engagement projects in which “digital is being used to make humans feel more human”. In the most effective of these, crowdsourcing techniques have been used to build new forms of commemoration and emotional connection to the past.

Emma Smith countered this emphasis on humanity’s need to remember with a call for tactical forgetting.

At a certain point, she argued, unlimited memory becomes less an ability to remember and more an inability to select and manage the overwhelming burden of data about the past: a state she compared to the neurological condition hyperthymestic syndrome and to the plight of Jorge Luis Borges’ character ‘Funes the Memorious’. A Professor of Shakespeare Studies at Oxford, Smith focused her talk on the digital recording and archiving of live theatre performances. Nevertheless, her argument for the creative gaps and mistakes of human (as distinct from computer) memory has evident relevance to many other aspects of a society which increasingly captures, records, and catalogues online its every moment and emotion (while, as Smith noted, as individuals we find ourselves “outsourcing the archival part of the brain to the internet”). Indeed, the paradoxical and perhaps troubling state of being “recorded live”, she suggested, might be one answer to the question of what it means to be human in the current digital world.

If the themes of these speakers can be summed up as, respectively, preserving, remembering and forgetting, Tom Chatfield turned to the problem of (as in the title of one of his books) thriving in a digital age. Human-machine interactions force us to question what humanity is, and what the role of an individual human might be, within vast communities newly linked by shared technology and shared consciousness. But for humanity and the digital to coexist, he warned, we need to be clear about the differences between humans – who might want to think of themselves as rational and self-contained units, but are “actually intensely social, emotional, intractably embodied creatures” – and machines – which might appear to have agency, understanding and motivation, but can only carry out actions programmed by people. He challenged the audience – and the rest of the TORCH Humanities in the Digital Age series – to imagine how we can describe humanity’s relationship to technology in all its richness and complexity, without anthropomorphism or sentimentality. And, as the digital develops in unpredictable, even unimaginable ways in the future, “what does a successful collaboration between humans and technology look like?”


Ruth Scobie


April 11, 2013


The Reading Brain in the Digital Age: The Science of Paper versus Screens

E-readers and tablets are becoming more popular as such technologies improve, but research suggests that reading on paper still boasts unique advantages

By Ferris Jabr




In a viral YouTube video from October 2011 a one-year-old girl sweeps her fingers across an iPad's touchscreen, shuffling groups of icons. In the following scenes she appears to pinch, swipe and prod the pages of paper magazines as though they too were screens. When nothing happens, she pushes against her leg, confirming that her finger works just fine—or so a title card would have us believe. The girl's father, Jean-Louis Constanza , presents "A Magazine Is an iPad That Does Not Work" as naturalistic observation—a Jane Goodall among the chimps moment—that reveals a generational transition. "Technology codes our minds," he writes in the video's description. "Magazines are now useless and impossible to understand, for digital natives"—that is, for people who have been interacting with digital technologies from a very early age. Perhaps his daughter really did expect the paper magazines to respond the same way an iPad would. Or maybe she had no expectations at all—maybe she just wanted to touch the magazines. Babies touch everything . Young children who have never seen a tablet like the iPad or an e-reader like the Kindle will still reach out and run their fingers across the pages of a paper book; they will jab at an illustration they like; heck, they will even taste the corner of a book. Today's so-called digital natives still interact with a mix of paper magazines and books, as well as tablets, smartphones and e-readers; using one kind of technology does not preclude them from understanding another. Nevertheless, the video brings into focus an important question: How exactly does the technology we use to read change the way we read? How reading on screens differs from reading on paper is relevant not just to the youngest among us , but to just about everyone who reads—to anyone who routinely switches between working long hours in front of a computer at the office and leisurely reading paper magazines and books at home; to people who have embraced e-readers for their convenience and portability, but admit that for some reason they still prefer reading on paper; and to those who have already vowed to forgo tree pulp entirely. As digital texts and technologies become more prevalent, we gain new and more mobile ways of reading—but are we still reading as attentively and thoroughly? How do our brains respond differently to onscreen text than to words on paper? Should we be worried about dividing our attention between pixels and ink or is the validity of such concerns paper-thin? Since at least the 1980s researchers in many different fields—including psychology, computer engineering, and library and information science—have investigated such questions in more than one hundred published studies. The matter is by no means settled. Before 1992 most studies concluded that people read slower, less accurately and less comprehensively on screens than on paper. Studies published since the early 1990s , however, have produced more inconsistent results: a slight majority has confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens. And recent surveys suggest that although most people still prefer paper—especially when reading intensively—attitudes are changing as tablets and e-reading technology improve and reading digital books for facts and fun becomes more common. In the U.S., e-books currently make up between 15 and 20 percent of all trade book sales. 
Even so, evidence from laboratory experiments , polls and consumer reports indicates that modern screens and e-readers fail to adequately recreate certain tactile experiences of reading on paper that many people miss and, more importantly, prevent people from navigating long texts in an intuitive and satisfying way. In turn, such navigational difficulties may subtly inhibit reading comprehension. Compared with paper, screens may also drain more of our mental resources while we are reading and make it a little harder to remember what we read when we are done. A parallel line of research focuses on people's attitudes toward different kinds of media. Whether they realize it or not, many people approach computers and tablets with a state of mind less conducive to learning than the one they bring to paper.

“There is physicality in reading,” says developmental psychologist and cognitive scientist Maryanne Wolf of Tufts University, “maybe even more than we want to think about as we lurch into digital reading—as we move forward perhaps with too little reflection. I would like to preserve the absolute best of older forms, but know when to use the new.”

Navigating textual landscapes

Understanding how reading on paper is different from reading on screens requires some explanation of how the brain interprets written language. We often think of reading as a cerebral activity concerned with the abstract—with thoughts and ideas, tone and themes, metaphors and motifs. As far as our brains are concerned, however, text is a tangible part of the physical world we inhabit. In fact, the brain essentially regards letters as physical objects because it does not really have another way of understanding them. As Wolf explains in her book Proust and the Squid, we are not born with brain circuits dedicated to reading. After all, we did not invent writing until relatively recently in our evolutionary history, around the fourth millennium B.C. So the human brain improvises a brand-new circuit for reading by weaving together various regions of neural tissue devoted to other abilities, such as spoken language, motor coordination and vision. Some of these repurposed brain regions are specialized for object recognition—they are networks of neurons that help us instantly distinguish an apple from an orange, for example, yet classify both as fruit. Just as we learn that certain features—roundness, a twiggy stem, smooth skin—characterize an apple, we learn to recognize each letter by its particular arrangement of lines, curves and hollow spaces. Some of the earliest forms of writing, such as Sumerian cuneiform, began as characters shaped like the objects they represented—a person's head, an ear of barley, a fish. Some researchers see traces of these origins in modern alphabets: C as crescent moon, S as snake. Especially intricate characters—such as Chinese hanzi and Japanese kanji—activate motor regions in the brain involved in forming those characters on paper: The brain literally goes through the motions of writing when reading, even if the hands are empty. Researchers recently discovered that the same thing happens in a milder way when some people read cursive.

Beyond treating individual letters as physical objects, the human brain may also perceive a text in its entirety as a kind of physical landscape. When we read, we construct a mental representation of the text in which meaning is anchored to structure. The exact nature of such representations remains unclear, but they are likely similar to the mental maps we create of terrain—such as mountains and trails—and of man-made physical spaces, such as apartments and offices. Both anecdotally and in published studies, people report that when trying to locate a particular piece of written information they often remember where in the text it appeared. We might recall that we passed the red farmhouse near the start of the trail before we started climbing uphill through the forest; in a similar way, we remember that we read about Mr. Darcy rebuffing Elizabeth Bennet on the bottom of the left-hand page in one of the earlier chapters.

In most cases, paper books have more obvious topography than onscreen text. An open paperback presents a reader with two clearly defined domains—the left and right pages—and a total of eight corners with which to orient oneself.
A reader can focus on a single page of a paper book without losing sight of the whole text: one can see where the book begins and ends and where one page is in relation to those borders. One can even feel the thickness of the pages read in one hand and pages to be read in the other. Turning the pages of a paper book is like leaving one footprint after another on the trail—there's a rhythm to it and a visible record of how far one has traveled. All these features not only make text in a paper book easily navigable, they also make it easier to form a coherent mental map of the text. In contrast, most screens, e-readers, smartphones and tablets interfere with intuitive navigation of a text and inhibit people from mapping the journey in their minds. A reader of digital text might scroll through a seamless stream of words, tap forward one page at a time or use the search function to immediately locate a particular phrase—but it is difficult to see any one passage in the context of the entire text. As an analogy, imagine if Google Maps allowed people to navigate street by individual street, as well as to teleport to any specific address, but prevented them from zooming out to see a neighborhood, state or country. Although e-readers like the Kindle and tablets like the iPad re-create pagination—sometimes complete with page numbers, headers and illustrations—the screen only displays a single virtual page: it is there and then it is gone. Instead of hiking the trail yourself, the trees, rocks and moss move past you in flashes with no trace of what came before and no way to see what lies ahead. "The implicit feel of where you are in a physical book turns out to be more important than we realized," says Abigail Sellen of Microsoft Research Cambridge in England and co-author of The Myth of the Paperless Office . "Only when you get an e-book do you start to miss it. I don't think e-book manufacturers have thought enough about how you might visualize where you are in a book." At least a few studies suggest that by limiting the way people navigate texts, screens impair comprehension. In a study published in January 2013 Anne Mangen of the University of Stavanger in Norway and her colleagues asked 72 10th-grade students of similar reading ability to study one narrative and one expository text, each about 1,500 words in length. Half the students read the texts on paper and half read them in pdf files on computers with 15-inch liquid-crystal display (LCD) monitors. Afterward, students completed reading-comprehension tests consisting of multiple-choice and short-answer questions, during which they had access to the texts. Students who read the texts on computers performed a little worse than students who read on paper. Based on observations during the study, Mangen thinks that students reading pdf files had a more difficult time finding particular information when referencing the texts. Volunteers on computers could only scroll or click through the pdfs one section at a time, whereas students reading on paper could hold the text in its entirety in their hands and quickly switch between different pages. Because of their easy navigability, paper books and documents may be better suited to absorption in a text. "The ease with which you can find out the beginning, end and everything inbetween and the constant connection to your path, your progress in the text, might be some way of making it less taxing cognitively, so you have more free capacity for comprehension," Mangen says. 
Supporting this research, surveys indicate that screens and e-readers interfere with two other important aspects of navigating texts: serendipity and a sense of control. People report that they enjoy flipping to a previous section of a paper book when a sentence surfaces a memory of something they read earlier, for example, or quickly scanning ahead on a whim. People also like to have as much control over a text as possible—to highlight with chemical ink, easily write notes to themselves in the margins as well as deform the paper however they choose. Because of these preferences—and because getting away from multipurpose screens improves concentration—people consistently say that when they really want to dive into a text, they read it on paper. In a 2011 survey of graduate students at National Taiwan University, the majority reported browsing a few paragraphs online before printing out the whole text for more in-depth reading. A 2008 survey of millennials (people born between 1980 and the early 2000s) at Salve Regina University in Rhode Island concluded that, "when it comes to reading a book, even they prefer good, old-fashioned print". And in a 2003 study conducted at the National Autonomous University of Mexico, nearly 80 percent of 687 surveyed students preferred to read text on paper as opposed to on a screen in order to "understand it with clarity".

Surveys and consumer reports also suggest that the sensory experiences typically associated with reading—especially tactile experiences—matter to people more than one might assume. Text on a computer, an e-reader and—somewhat ironically—on any touch-screen device is far more intangible than text on paper. Whereas a paper book is made from pages of printed letters fixed in a particular arrangement, the text that appears on a screen is not part of the device's hardware—it is an ephemeral image. When reading a paper book, one can feel the paper and ink and smooth or fold a page with one's fingers; the pages make a distinctive sound when turned; and underlining or highlighting a sentence with ink permanently alters the paper's chemistry. So far, digital texts have not satisfyingly replicated this kind of tactility (although some companies are innovating, at least with keyboards). Paper books also have an immediately discernible size, shape and weight. We might refer to a hardcover edition of War and Peace as a hefty tome or a paperback Heart of Darkness as a slim volume. In contrast, although a digital text has a length—which is sometimes represented with a scroll or progress bar—it has no obvious shape or thickness. An e-reader always weighs the same, regardless of whether you are reading Proust's magnum opus or one of Hemingway's short stories. Some researchers have found that these discrepancies create enough "haptic dissonance" to dissuade some people from using e-readers. People expect books to look, feel and even smell a certain way; when they do not, reading sometimes becomes less enjoyable or even unpleasant. For others, the convenience of a slim portable e-reader outweighs any attachment they might have to the feel of paper books.

Exhaustive reading

Although many old and recent studies conclude that people understand what they read on paper more thoroughly than what they read on screens, the differences are often small. Some experiments, however, suggest that researchers should look not just at immediate reading comprehension, but also at long-term memory.
In a 2003 study, Kate Garland of the University of Leicester and her colleagues asked 50 British college students to read study material from an introductory economics course either on a computer monitor or in a spiral-bound booklet. After 20 minutes of reading, Garland and her colleagues quizzed the students with multiple-choice questions. Students scored equally well regardless of the medium, but differed in how they remembered the information.

Psychologists distinguish between remembering something—which is to recall a piece of information along with contextual details, such as where, when and how one learned it—and knowing something, which is feeling that something is true without remembering how one learned the information. Generally, remembering is a weaker form of memory that is likely to fade unless it is converted into more stable, long-term memory that is "known" from then on. When taking the quiz, volunteers who had read study material on a monitor relied much more on remembering than on knowing, whereas students who read on paper depended equally on remembering and knowing. Garland and her colleagues think that students who read on paper learned the study material more thoroughly, more quickly; they did not have to spend a lot of time searching their minds for information from the text, trying to trigger the right memory—they often just knew the answers.

Other researchers have suggested that people comprehend less when they read on a screen because screen-based reading is more physically and mentally taxing than reading on paper. E-ink is easy on the eyes because it reflects ambient light just like a paper book, but computer screens, smartphones and tablets like the iPad shine light directly into people's faces. Depending on the model of the device, glare, pixelation and flickers can also tire the eyes. LCDs are certainly gentler on eyes than their predecessors, cathode-ray tubes (CRTs), but prolonged reading on glossy self-illuminated screens can cause eyestrain, headaches and blurred vision. Such symptoms are so common among people who read on screens—affecting around 70 percent of people who work long hours in front of computers—that the American Optometric Association officially recognizes computer vision syndrome.

Erik Wästlund of Karlstad University in Sweden has conducted some particularly rigorous research on whether paper or screens demand more physical and cognitive resources. In one of his experiments, 72 volunteers completed the Higher Education Entrance Examination READ test—a 30-minute, Swedish-language reading-comprehension exam consisting of multiple-choice questions about five texts averaging 1,000 words each. People who took the test on a computer scored lower and reported higher levels of stress and tiredness than people who completed it on paper.

In another set of experiments, 82 volunteers completed the READ test on computers, either as a paginated document or as a continuous piece of text. Afterward researchers assessed the students' attention and working memory, which is a collection of mental talents that allow people to temporarily store and manipulate information in their minds. Volunteers had to quickly close a series of pop-up windows, for example, sort virtual cards or remember digits that flashed on a screen. Like many cognitive abilities, working memory is a finite resource that diminishes with exertion.
Although people in both groups performed equally well on the READ test, those who had to scroll through the continuous text did not do as well on the attention and working-memory tests. Wästlund thinks that scrolling—which requires a reader to consciously focus on both the text and how they are moving it—drains more mental resources than turning or clicking a page, which are simpler and more automatic gestures. A 2004 study conducted at the University of Central Florida reached similar conclusions.

Attitude adjustments

An emerging collection of studies emphasizes that in addition to screens possibly taxing people's attention more than paper, people do not always bring as much mental effort to screens in the first place. Subconsciously, many people may think of reading on a computer or tablet as a less serious affair than reading on paper. Based on a detailed 2005 survey of 113 people in northern California, Ziming Liu of San Jose State University concluded that people reading on screens take a lot of shortcuts—they spend more time browsing, scanning and hunting for keywords compared with people reading on paper, and are more likely to read a document once, and only once.

When reading on screens, people seem less inclined to engage in what psychologists call metacognitive learning regulation—strategies such as setting specific goals, rereading difficult sections and checking how much one has understood along the way. In a 2011 experiment at the Technion–Israel Institute of Technology, college students took multiple-choice exams about expository texts either on computers or on paper. Researchers limited half the volunteers to a meager seven minutes of study time; the other half could review the text for as long as they liked. When under pressure to read quickly, students using computers and paper performed equally well. When managing their own study time, however, volunteers using paper scored about 10 percentage points higher. Presumably, students using paper approached the exam with a more studious frame of mind than their screen-reading peers, and more effectively directed their attention and working memory.

Perhaps, then, any discrepancies in reading comprehension between paper and screens will shrink as people's attitudes continue to change. The star of "A Magazine Is an iPad That Does Not Work" is three-and-a-half years old today and no longer interacts with paper magazines as though they were touchscreens, her father says. Perhaps she and her peers will grow up without the subtle bias against screens that seems to lurk in the minds of older generations.

In current research for Microsoft, Sellen has learned that many people do not feel much ownership of e-books because of their impermanence and intangibility: "They think of using an e-book, not owning an e-book," she says. Participants in her studies say that when they really like an electronic book, they go out and get the paper version. This reminds Sellen of people's early opinions of digital music, which she has also studied. Despite initial resistance, people love curating, organizing and sharing digital music today. Attitudes toward e-books may transition in a similar way, especially if e-readers and tablets allow more sharing and social interaction than they currently do. Books on the Kindle can only be loaned once, for example.

To date, many engineers, designers and user-interface experts have worked hard to make reading on an e-reader or tablet as close to reading on paper as possible.
E-ink resembles chemical ink and the simple layout of the Kindle's screen looks like a page in a paperback. Likewise, Apple's iBooks attempts to simulate the overall aesthetic of paper books, including somewhat realistic page-turning. Jaejeung Kim of KAIST Institute of Information Technology Convergence in South Korea and his colleagues have designed an innovative and unreleased interface that makes iBooks seem primitive. When using their interface, one can see the many individual pages one has read on the left side of the tablet and all the unread pages on the right side, as if holding a paperback in one's hands. A reader can also flip bundles of pages at a time with a flick of a finger.

But why, one could ask, are we working so hard to make reading with new technologies like tablets and e-readers so similar to the experience of reading on the very ancient technology that is paper? Why not keep paper and evolve screen-based reading into something else entirely? Screens obviously offer readers experiences that paper cannot. Scrolling may not be the ideal way to navigate a text as long and dense as Moby Dick, but the New York Times, Washington Post, ESPN and other media outlets have created beautiful, highly visual articles that depend entirely on scrolling and could not appear in print in the same way. Some Web comics and infographics turn scrolling into a strength rather than a weakness. Similarly, Robin Sloan has pioneered the tap essay for mobile devices. The immensely popular interactive Scale of the Universe tool could not have been made on paper in any practical way. New e-publishing companies like Atavist offer tablet readers long-form journalism with embedded interactive graphics, maps, timelines, animations and sound tracks. And some writers are pairing up with computer programmers to produce ever more sophisticated interactive fiction and nonfiction in which one's choices determine what one reads, hears and sees next.

When it comes to intensively reading long pieces of plain text, paper and ink may still have the advantage. But text is not the only way to read.

Reading in a digital age

By Naomi S. Baron | Sep 25, 2017 | Feature Article

Even millennials acknowledge that whether you read on paper or on a digital screen affects your attention to words and the ideas behind them. What are the implications for how we teach?

The digital revolution has done much to reshape how students read, write, and access information in school. Once-handwritten essays are now word-processed. Encyclopedias have yielded to online searches. One-size-fits-all teaching is tilting toward personalized learning. And a growing number of assignments ask students to read on digital screens rather than in print.

Yet how much do we actually know about the educational implications of this emphasis on using digital media? In particular, when it comes to reading, do digital screens make it easier or harder for students to pay careful attention to words and the ideas behind them, or is there no difference from print?

Over the past decade, researchers in various countries have been comparing how much readers comprehend and remember when they read in each medium. In nearly all cases, there was essentially no difference between the testing scenarios. (See Baron, Calixte, & Havewala, 2017 for a review.) However, such findings need to be taken with a grain of salt. These studies have typically focused on captive research subjects, mostly college students who commonly are paid to participate in an experiment or who participate to fill a course requirement. Ask them to read passages and then answer SAT-style comprehension questions, and they tend to do so reasonably carefully, whether they read on a screen or on paper. Under those conditions, it’s not surprising that their performance would be consistent across platforms.

But the devil may lie in the details. When researchers have altered the testing conditions or the types of questions they ask, discrepancies have appeared, suggesting that the medium does in fact matter. For example, Ackerman and Goldsmith (2011) observed that when participants could choose how much time to spend on digital versus print reading, they devoted less to reading onscreen and had lower comprehension scores. Schugar and colleagues (2011) found that participants reported using fewer study strategies (such as highlighting, note-taking, or bookmarking) when reading digitally. Kaufman and Flanagan (2016) noted that when reading in print, study participants did better answering abstract questions that required inferential reasoning; by contrast, participants scored better reading digitally when answering concrete questions. Researchers at the University of Reading (Dyson & Haselgrove, 2000) observed that reading comprehension declined when students were scrolling as they read, rather than focusing on stationary chunks of text.

What about research with younger children? Schugar and Schugar found that middle grades students comprehended more when reading print than when using e-books on an iPad (Paul, 2014) — interactive features of the digital platform apparently distracted readers from the textual content. However, the same researchers observed that among K-6 readers, e-books generated a higher level of engagement (Schugar, Smith, & Schugar, 2013). Working with high school students in Norway, Anne Mangen and her colleagues (2013) concluded that print yielded better comprehension scores. Mangen argues that print makes it easier for students to create cognitive maps of the entire passage they are reading.

For educators, though, the real question is not how students perform in experiments. More important is what they do when reading on their own: Do they take as much time reading in both media? Do they read as carefully? In short, in their everyday lives, how much and what sort of attention do they pay to what they are reading?

Questions about reading in a digital age

History is strewn with examples of people worrying that new technologies will undermine older skills. In the late 5th century BC, when the spread of writing was challenging an earlier oral tradition, Plato expressed concern (in the Phaedrus) that “trust in writing . . . will discourage the use of [our] own memory.” Writing has proven an invaluable technology. Digital media have as well. These new tools make it possible for millions of people to have access to texts that would otherwise be beyond their reach, financially or physically. Computer-driven devices enable us to expand our scope of educational and recreational experience to include audio and visual materials, often on demand. But as with writing, it’s an empirical question what the pros and cons are of the old and the new. Writing is a vital cultural tool, but there is little doubt it discourages memory skills.

When we think about the educational implications of digital reading, we need to study the issue with open minds, not make presuppositions about advantages and disadvantages.

To help forward this exploration, my own research has been tackling three intertwined questions about reading in a digital age. First, what do readers tell us directly about their print versus digital reading habits? Second, what else do readers reveal about their attitudes toward reading in print versus onscreen, and what can we infer about how well they pay attention when reading in each medium? The third question is more broad-stroked: In the current technological climate, are we changing the very notion of what it means to read?

Students are more likely to multitask when reading onscreen than in print — especially in the U.S. where 85% reported multitasking when reading digitally, compared with 26% for print.

I’ve been investigating these questions for about a half-dozen years, beginning with some pilot studies in the U.S. (Baron, 2013) and continuing with surveys (between 2013 and 2015) of more than 400 university students from the U.S., Japan, Germany, Slovakia, and India. Participants were enrolled in classes taught by colleagues, or they were classmates of one of my research assistants. Everyone was between age 18 and 26 (mean age: 21). About two-thirds were female and one-third male. (For study details, see Baron, Calixte, & Havewala, 2017.) Though my study participants were university students, I suspect that most issues at play are relevant for younger readers who have mastered the skills we would expect of middle-school students and above. Use of digital technologies is now ubiquitous among both adolescents and young adults, and teachers at all levels are increasingly assigning e-books (or online articles) rather than print.

The study consisted of three sets of questions. In the first set, we asked students:

  • How much time they spent reading in print versus onscreen;
  • Whether cost was a factor in their choice of reading platform;
  • In which medium they were more likely to reread;
  • Whether text length influenced their platform choice;
  • How likely they were to multitask when reading in each medium; and
  • In which medium they felt they concentrated best.

In the next set, we asked what students liked most — and least — about reading in each medium. Finally, we gave participants the opportunity to offer additional comments.

Print versus digital reading habits

Here are the main takeaways of what students in the study reported in the first set of questions about their reading habits:

Time reading in print versus onscreen

Overall, participants reported spending about two-thirds of their time reading in print, both for schoolwork and pleasure. There was considerable variation across countries, with the Japanese doing the most reading onscreen. In considering these numbers, especially for academic reading, we need to keep in mind that sometimes reading assignments are only available in one medium or the other, so students are not making independent choices.

More than four-fifths of the participants said that if cost were the same, they would choose to read in print rather than onscreen. This finding was particularly strong for academic reading and especially high in Germany (94%). Students (and for that matter, K-12 school systems) often cite cost as the reason for selecting digital rather than print textbooks. It’s therefore telling that if cost is removed from the equation, digital millennials commonly prefer print.

Not everyone in the study reread — either for schoolwork or for pleasure. Among those who did, six out of ten indicated they were more likely to reread print. Fewer than two out of ten chose digital, while the rest said both media were equally likely. Rereading is relevant to the issue of attention since a second reading offers opportunities for review or reflection.

Text length

When the amount of text is short, participants displayed mixed preferences, both when reading academic works and when reading for pleasure. However, with longer texts, more than 86% preferred print for schoolwork and 78% when reading for pleasure. Preference for reading longer works in print has been reported in multiple studies. As Farinosi and colleagues (2016) observed, “If the text requires strategic reading, such as papers, essays, books, the paper version is preferred” (p. 417).

Multitasking

Students reported being more likely to multitask when reading onscreen than in print. Responses from the U.S. participants were particularly stark, with 85% indicating they multitasked when reading digitally, compared with 26% for print. The detrimental cognitive effects of multitasking are well known (e.g., Carrier et al., 2015). We can reasonably infer that students who multitask while reading are less likely to be paying close attention to the text than those who don’t.

Concentration

The most dramatic finding for this set of questions came in response to the query about the platform on which students felt they concentrated best. Selecting from print, computer, tablet, e-reader, or mobile phone, 92% said it was easiest to concentrate when reading print.

Paying attention to reading

Students provided open-ended comments to the second set of questions, which asked what they liked most and least about reading in print and onscreen. In these responses, students praised the physicality of print but grumbled that it was not easily searchable. They complained that reading onscreen gave them eyestrain but enjoyed its convenience.

They also had telling things to say about the cognitive consequences of reading in hardcopy versus onscreen. Of all the “like least” comments about reading digitally, 21% were cognitive in nature. Nearly all these comments talked about perceived distraction or lack of concentration. U.S. students were especially vocal: Nearly 43% of their “like least” comments about reading digitally concerned distraction or lack of concentration. When asked what they “liked most” about reading in print, respondents said, “It’s easier to focus,” I “feel like the content sticks in the head more easily,” “reading in hardcopy makes me focus more on what I am reading,” and “I feel like I understand it more [when reading in print].”

In their additional comments (the last question category), study participants wrote about how long it takes to read the same length text on the two platforms. One student observed, “It takes more time to read the same number of pages in print comparing to digital,” suggesting that the mindset she brings to reading print involves greater (and more time-consuming) attention than the one she brings to reading digitally. In fact, in an earlier pilot study, one student griped that what she “liked least” about reading hardcopy was that “it takes me longer because I read more carefully.”

Unexpectedly, several students said reading in print was boring. In response to the question of what they “liked least” about reading in print, one participant complained that “It becomes boring sometimes,” while another wrote, “it takes time to sit down and focus on the material.” Common sense suggests that if students anticipate that text in print will be boring, they will likely approach it with reduced enthusiasm. Diminished interest sometimes translates into skimming rather than reading carefully and sometimes not doing the assigned reading at all.

Is the nature of reading changing?

The biggest challenge to reading attentively on digital platforms is that we largely use digital devices for quick action: Look up an address, send a Facebook status update, grab the news headlines (but not the meat of the article), multitask between online shopping and writing an essay. When we go to read something substantive on a laptop or e-reader, tablet, or mobile phone, our now-habitualized instincts tell us to move things along.

Coupled with this mindset is an evolving sense that writing is for the here-and-now, not the long haul. Since written communication first emerged (in different places, under different circumstances, at different times), one of its consistent attributes has been that it is a durable form of communication, one that we can reread or refer to. Today, a nexus of forces is making writing seem more ephemeral.

A recent Pew Research Center study of news-reading habits (Mitchell et al., 2016) reported that among 18- to 29-year-olds, 50% said they often got news online, compared with only 5% who read print newspapers. While some of us save print news clippings, few archive their online versions. Vast numbers of students choose to rent textbooks (whether digitally or in print), which means the book is out of sight and not available for future consultation after the semester ends. True, K-12 students have long been giving back their print books at the end of the year, and college students have commonly sold books they don’t wish to keep. But my conversations now with students who are dedicated readers indicate they don’t see their college years as the time to start building a personal library.

If cost is removed from the equation, digital millennials commonly prefer print.

What about public or school libraries? Increasingly, budgets are being shifted from print to digital materials. The three primary motivations are space, cost, and convenience. To grow the collection, you don’t need to build another wing. Digital is (commonly) less expensive. And users can access the collection any time of day and anywhere in the world with only an internet connection.

All true. But there are consequences. When I access a library book digitally, I find myself “using” it, not reading it. I make a quick foray to find, for instance, the reference I need for an article I’m writing, and then I exit. Had I held the physical book in my hand, it might have taken longer to find the reference, but I probably would have read entire paragraphs or chapters. Microsoft researcher Abigail Sellen has made a related observation. In studying how people perceive material they read (or store) online, she says they “think of using an e-book, not owning an e-book” (cited in Jabr, 2013).

Savvy students are aware of how the computer FIND function lets them zero in on a specific word or phrase so as to answer a question they have been asked to write about, blithely dismissing the obligation to actually read the full assigned text. Using, not reading. The more we swap physical books for digital ones, the easier it is for students to swoop down and cherry-pick rather than work their way through an argument or story.

Finally, contemporary digital technology is altering the role of reading in education. Film strips of old have been replaced by far more engaging (and educationally enriching) TED Talks and YouTubes, podcasts and audio books. The potential of these digital media is extraordinary, both because of their educational richness and the democratic access they provide. Yet at the same time, we should be figuring out the right curricular balance of video, audio, and textual materials.

Implications for educators

The most important lesson I have learned from my research on reading in print versus digitally is the value of asking users themselves what they like and don’t like — and why — about reading in each medium. Students are acutely aware of the cognitive tradeoffs that many perceive themselves to be making when reading on one platform rather than the other. The issue is not that digital reading necessarily leads us to pay less attention. Rather, it is that digital technologies make it easy (and in a sense encourage us) to approach text with a different mindset than the one most of us have been trained to use while reading print.

We need to ask ourselves how the digital mindset is reshaping students’ (and our own) understanding of what it means to read. Since online technology is tailor-made for searching for information rather than analyzing complex ideas, will the meaning of “reading” become “finding information” rather than “contemplating and understanding”? Moreover, if print is increasingly seen as boring (compared with digital text), will our attention spans while reading print generally diminish?

Conceivably, we might progressively abandon careful reading in favor of what has been called “hyper reading” — in the words of Katherine Hayles (2012), reading that aims “to conserve attention by quickly identifying relevant information so that only relatively few portions of a given text are actually read” (p. 12). To be fair, even academics seem to be taking less time per scholarly article, particularly online articles, than they used to (Tenopir et al., 2009). When it comes to using web sites, studies indicate (Nielsen, 2008) that on average, people are likely reading less than 30% of the words.

The issue of sustained attention extends beyond reading onscreen to other digital media. Patricia Greenfield (2009) has observed that while television, video games, and the internet may foster visual intelligence, “the cost seems to be deep processing: mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.”

Returning to the physical properties of print: If fewer young adults are building their own book collections and if libraries are increasingly going digital, will writing no longer be seen as a durable medium? Yes, we could always look up something again on a digital device, but do we? If audio and video are gradually supplanting text as sources of education and personal enrichment, how should we think about the future role of text as a vehicle of cultural dissemination?

Digital technology is still in its relative infancy. We know it can be an incredibly useful educational tool, but we need much more research before we can draw firm conclusions about its positive and negative features. In the case of reading, our first task is to make ourselves aware of the effect technology potentially has on how we wrap our minds around the written word when encountered in print versus onscreen. Our second task is to embed that understanding in our larger thinking about the role of writing as a means of communicating and thinking.

Ackerman, R. & Goldsmith, M. (2011). Metacognitive regulation of text learning: On screen versus on paper. Journal of Experimental Psychology: Applied, 17 (1), 18-32.

Baron, N.S. (2013). Redefining reading: The impact of digital communication media. PMLA , 128 (1), 193-200.

Baron, N.S. (2015). Words onscreen: The fate of reading in a digital world. New York, NY: Oxford.

Baron, N.S., Calixte, R.M., & Havewala, M. (2017). The persistence of print among university students: An exploratory study. Telematics & Informatics, 34, 590-604.

Carrier, L.M., Rosen, L.D., Cheever, N.A., & Lim, A.F. (2015). Causes, effects, and practicalities of everyday multitasking. Developmental Review, 35, 64-78.

Dyson, M.C. & Haselgrove, M. (2000). The effects of reading speed and reading patterns on the understanding of text read from screen. Journal of Research in Reading , 23 (2), 210-223.

Farinosi, M., Lim, C., & Roll, J. (2016). Book or screen, pen or keyboard? A cross-cultural sociological analysis of writing and reading habits basing on Germany, Italy, and the UK. Telematics and Informatics, 33 (2), 410-421.

Greenfield, P.M. (2009). Technology and informal education: What is taught, what is learned? Science, 323 (5910), 69-71.

Hayles, K. (2012). How we think: Digital media and contemporary technogenesis. Chicago, IL: University of Chicago.

Jabr, F. (2013, April 11). The reading brain in the digital age: The science of paper versus screens. Scientific American.

Kaufman, G. & Flanagan, M. (2016). High-low split: Divergent cognitive construal levels triggered by digital and nondigital platforms. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM, pp. 2773-2777.

Mangen, A., Walgermo, B.R., & Brønnick, K. (2013). Reading linear texts on paper versus computer screen: Effects on reading comprehension. International Journal of Educational Research, 58, 61-68.

Mitchell, A., Gottfried, J., Barthel, M., & Shearer, E. (2016, July 7). The modern news consumer: News attitudes and practices in the digital age. New York, NY: Pew Research Center. www.journalism.org/2016/07/07/the-modern-news-consumer

Nielsen, J. (2008, May 6). How little do users read? Fremont, CA: Nielsen Norman Group. www.nngroup.com/articles/how-little-do-users-read/

Paul, A.M. (2014, April 10). Students reading e-books are losing out, study suggests. New York Times.

Schugar, J.T., Schugar, H., & Penny, C. (2011). A Nook or a book? Comparing college students’ reading comprehension levels, critical reading, and study skills. International Journal of Technology in Teaching and Learning, 7 (2), 174-192.

Schugar, H.R., Smith, C.A., & Schugar, J.T. (2013). Teaching with interactive e-books in grades K-6. The Reading Teacher, 66 (8), 615-624.

Tenopir, C., King, D.W., Edwards, S., & Wu, L. (2009). Electronic journals and changes in scholarly article seeking and reading patterns. Aslib Proceedings: New Information Perspective, 61(1), 5-32.

Citation: Baron, N.S. (2017). Reading in a digital age. Phi Delta Kappan, 99 (2), 15-20.

ABOUT THE AUTHOR

Naomi S. Baron

NAOMI S. BARON is a professor of linguistics, Department of World Languages and Cultures, American University, Washington, D.C.


MIT Technology Review

The race to save our online lives from a digital dark age

We’re making more data than ever. What can—and should—we save for future generations? And will they be able to understand it?

By Niall Firth

There is a photo of my daughter that I love. She is sitting, smiling, in our old back garden, chubby hands grabbing at the cool grass. It was taken in 2013, when she was almost one, on an aging Samsung digital camera. I originally stored it on a laptop before transferring it to a chunky external hard drive.

A few years later, I uploaded it to Google Photos. When I search for the word ”grass,” Google’s algorithm pulls it up. It always makes me smile.

I pay Google £1.79 a month to keep my memories safe. That’s a lot of trust I’m putting in a company that’s existed for only 26 years. But the hassle it removes seems worth it. There’s just so much stuff nowadays. The admin required to keep it updated and stored safely is just too onerous.

My parents didn’t have this problem. They took occasional photos of me on a film camera and periodically printed them out on paper and put them in a photo album. These pictures are still viewable now, 40-odd years later, on faded yellowing photo paper—a few frames per year. 

Many of my memories from the following decades are also fixed on paper. The letters I received from my friends when traveling abroad in my 20s were handwritten on lined paper. I still have them crammed in a shoebox, an amusing but relatively small archive of an offline time.

We no longer have such space limitations. My iPhone takes thousands of photos a year. Our Instagram and TikTok feeds are constantly updated. We collectively send billions of WhatsApp messages and texts and emails and tweets.

But while all this data is plentiful, it’s also more ephemeral. One day in the maybe-not-so-distant future, YouTube won’t exist and its videos may be lost forever. Facebook—and your uncle’s holiday posts—will vanish. There is precedent for this. MySpace, the first largish-scale social network, deleted every photo, video, and audio file uploaded to it before 2016, seemingly inadvertently. Entire tranches of Usenet newsgroups, home to some of the internet’s earliest conversations, have gone offline forever and vanished from history. And in June this year, more than 20 years of music journalism disappeared when the MTV News archives were taken offline.

For many archivists, alarm bells are ringing. Across the world, they are scraping up defunct websites or at-risk data collections to save as much of our digital lives as possible. Others are working on ways to store that data in formats that will last hundreds, perhaps even thousands, of years. 

The endeavor raises complex questions. What is important to us? How and why do we decide what to keep—and what do we let go? 

And how will future generations make sense of what we’re able to save?

“Welcome to the challenge of every historian, archaeologist, novelist,” says Genevieve Bell, a cultural anthropologist. “How do you make sense of what’s left? And then how do you avoid reading it through the lens of the now?”

Last-chance saloon

There is more stuff being created now than at any time in history. At Google’s I/O conference this year, the firm’s CEO, Sundar Pichai, said that 6 billion photos and videos are uploaded to Google Photos every day. More than 40 million WhatsApp messages are sent every minute.

Even with so much more of it, though, our data is more fragile than ever. Books could burn in a freak library fire, but data is much easier to wipe forever. We’ve seen it happen—not only in incidents like the accidental deletion of MySpace data but also, sometimes, with intent. 

In 2009, Yahoo announced it was going to pull the plug on the web-hosting platform GeoCities, putting millions of carefully created web pages on the chopping block. While most of these pages might seem inconsequential—GeoCities was famous for its amateurish, early-web aesthetic and its pages dedicated to various collections, obsessions, or fandoms—they represented an early chapter of the web, and one that was about to be lost forever.

And it would have been, if a ragtag group of volunteer archivists led by Jason Scott hadn’t stepped in. 

“We sprang into action, and part of the fury and confusion of the time was we were going from downloading a handful of interesting sites to suddenly taking on an anchoring website of the early web,” Scott recalls.

His group, called Archive Team, quickly mobilized and downloaded as many GeoCities pages as possible before it closed for good. He and the team ended up being able to save most of the site, archiving millions of pages between April and October 2009. He estimates that they managed to download and store around a terabyte, but he notes that the size of GeoCities waxed and waned and was around nine terabytes at its peak. Much was likely gone for good. “It contained 100% user-generated works, folk art, and honest examples of human beings writing information and histories that were nowhere else,” he says.

Known for his top hat and cyberpunk-infused sense of style, Scott has made it his life’s mission to help save parts of the web that are at risk of being lost. “It is becoming more understood that archives, archiving, and preservation are a choice, a duty, and not something that just happens like the tides,” he says.

Scott now works as “free-range archivist and software curator” with the Internet Archive, an online library started in 1996 by the internet pioneer Brewster Kahle to save and store information that would otherwise be lost. 

As a society, we’re creating so much new stuff that we must always delete more things than we did the year before.

Over the past two decades, the Internet Archive has amassed a gigantic library of material scraped from around the web, including that GeoCities content. It doesn’t just save purely digital artifacts, either; it also has a vast collection of digitized books that it has scanned and rescued. Since it began, the Internet Archive has collected more than 145 petabytes of data, including more than 95 million public media files such as movies, images, and texts. It has managed to save almost half a million MTV news pages.

Its Wayback Machine, which lets users rewind to see how certain websites looked at any point in time, has more than 800 billion web pages stored and captures a further 650 million each day. It also records and stores TV channels from around the world and even saves TikToks and YouTube videos. They are all stored across multiple data centers that the Internet Archive owns itself.
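
Alongside the browsing interface, the Internet Archive also publishes a small "availability" API that reports whether the Wayback Machine holds a snapshot of a given page. The Python sketch below shows roughly how a script might query it; the endpoint and JSON field names follow the Archive's public documentation as I understand it, so treat them as assumptions to verify against the current docs rather than a guaranteed interface, and the example URL is purely illustrative.

```python
# Minimal sketch: ask the Internet Archive's public "availability" endpoint
# whether the Wayback Machine holds a snapshot of a page near a given date.
# Field names ("archived_snapshots", "closest", "available", "url") reflect the
# documented response format at the time of writing; double-check before relying on them.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str, timestamp: str = "20130101"):
    """Return the URL of the archived snapshot closest to `timestamp`, or None."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

if __name__ == "__main__":
    # Illustrative example: look for a 2009-era capture of GeoCities' front page.
    print(closest_snapshot("http://www.geocities.com/", "20090401"))
```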

It’s a Sisyphean task. As a society, we’re creating so much new stuff that we must always delete more things than we did the year before, says Jack Cushman, director at Harvard’s Library Innovation Lab, where he helps libraries and technologists learn from one another. We “have to figure out what gets saved and what doesn’t,” he says. “And how do we decide?”  

Archivists have to make such decisions constantly. Which TikToks should we save for posterity, for example?

We shouldn’t try too hard to imagine what future historians would find interesting about us, says Niels Brügger, an internet researcher at Aarhus University in Denmark. “We cannot imagine what historians in 30 years’ time would like to study about today, because we don’t have a clue,” he says. “So we shouldn’t try to anticipate and sort of constrain the possible questions that future historians would ask.”

Instead, Brügger says, we should just save as much stuff as possible and let them figure it out later. “As a historian, I would definitely go for: Get it all, and then historians will find out what the hell they’re going to do with it,” he says.

At the Internet Archive, it’s the stuff most at risk of being lost that gets prioritized, says Jefferson Bailey, who works there helping develop archiving software for libraries and institutions. “Material that is ephemeral or at risk or has not yet been digitized and therefore is more easily destroyed, because it’s in analog or print format—those do get priority,” he says. 

People can request that pages be archived. Libraries and institutions also make nominations. And the staff sorts out the rest. Across open social media like TikTok and YouTube, archive teams at libraries around the world select certain accounts, copy what they want to save, and share those copies with the Internet Archive. It could be snapshots of what was trending each day, as well as tweets or videos from accounts run by notable individuals such as the US president.

The process can’t capture everything, but it offers a pretty good slice of what has preoccupied us in the early decades of the 21st century. While historical records have typically relied upon the private letters and belongings of society’s richest, an archive process that scrapes tweets is always going to be a bit more egalitarian.

“You can get a very interesting and diverse snapshot of our cultural moments of the last 30, 40 years,” says Bailey. “That is very different from what a traditional archive looked like 100 years ago.” 

As citizens, we could also help future historians. Brügger suggests people could make “data donations” of their personal correspondence to archives. “One week per year, invite everyone to donate the emails from that week,” he says. “If you had these time slices of email correspondence from thousands of people, year by year, that would be really great.”

Scott imagines future historians eventually using AI to query these archives to gain a unique insight into how we lived. “You’ll be able to ask a machine: ‘Could you show me images of people enjoying themselves at amusement parks with their families from the ’60s?’ and it will go, ‘Here you go,’” he says. “The work we did up to here was done in faith that something like this might exist.”

The past guides the future

Human knowledge doesn’t always disappear with a dramatic flourish like GeoCities; sometimes it is erased gradually. You don’t know something’s gone until you go back to check it. One example of this is “link rot,” where hyperlinks on the web no longer direct you to the right target, leaving you with broken pages and dead ends. A Pew Research Center study from May 2024 found that 23% of web pages that were around in 2013 are no longer accessible.

It’s not just web links that die without constant curation and care. Unlike paper, the formats that now store most of our data require certain software or hardware to run. And these tools can become obsolete quickly. Many of our files can no longer be read because the applications that read them are gone or the data has become corrupted, for example.

One way to mitigate this problem is to transfer important data to the latest medium on a regular basis, before the programs required to read it are lost forever. At the Internet Archive and other libraries, the way information is stored is refreshed every few years. But for data that is not being actively looked after, it may be only a few years before the hardware required to access it is no longer available. Think about once ubiquitous storage mediums like Zip drives or CompactFlash. 

Some researchers are looking into ways to make sure we can always access old digital formats, even if the kit required to read them has become a museum piece. The Olive project, run by Mahadev Satyanarayanan at Carnegie Mellon University, aims to make it possible for anyone to use any application, however old, “with just a click.” His team has been working since 2012 to create a huge, decentralized network that supports “virtual machines”—emulators for old or defunct operating systems and all the software that they run.

Keeping old data alive like this is a way to protect against what the computer scientist Danny Hillis once dubbed the “digital dark age,” a nod to the early medieval period when a lack of written material left future historians little to go on.

Hillis, an MIT alum who pioneered parallel computing, thinks the rapid technological upheaval of our time will leave much of what we’re living through a mystery to scholars. 

“As I get older, I keep thinking, how can I be a good ancestor?” Vint Cerf, one of the internet’s founders

“When people look back at this period, they’ll say, ‘Oh, well, you know, here was this sort of incomprehensibly fast technological change, and a lot of history got lost during that change,’” he says.

Hillis was one of the founders (along with Brian Eno and Stewart Brand) of the Long Now Foundation, a San Francisco–based organization that is known for its eye-catching art/science projects such as the Clock of the Long Now, a Jeff Bezos–funded gigantic mechanical clock currently under construction in a mountain in West Texas that is designed to keep accurate time for 10,000 years. It also created the Rosetta Disc, a circle of nickel that has been etched at microscopic scale with documentation for around 1,500 of the world’s languages. In February, a copy of the disc touched down on the moon aboard the Odysseus lander. Part of the Long Now’s focus is to help people think about how we protect our history for future generations. It’s not just about making life easier for historians. It’s about helping us be “better ancestors,” according to the organization’s mission statement.

It’s a sentiment that chimes with Vint Cerf, one of the internet’s founders. “As I get older, I keep thinking, how can I be a good ancestor?” he says.

“An understanding of what has happened in the past is helpful for anticipating or interpreting what’s happening in the present and what might happen in the future,” says Cerf. There are “all kinds of scenarios where the absence of knowledge of the past is a debilitating weakness for a society.” 

“If we don’t remember, we can’t think, and the way that society remembers is by writing things down and putting them in libraries,” agrees Kahle. Without such repositories, he says, “people will be confused as to what’s true and not true.”

Kahle started the Internet Archive as a way to make sure all knowledge is free for anyone, but he feels the balance of power has tilted away from libraries and toward corporations. And that is likely to be a problem for keeping things accessible in the long term.

“If it’s left up to the corporations, it’s all gone,” he says. “Not only are we talking about classic published works—like your magazine, or books—but we’re talking about Facebook pages, Twitter pages, your personal blogs. All of those in general are on corporate platforms now. And those will all disappear.”

Losing our long-term digital archives has real implications for how society runs, says Harvard’s Cushman, who points out that our legal decisions and paperwork are largely stored digitally. Without a permanent, unalterable record, we can no longer rely on past judgments to inform the present. His team has created ways to let courts and law journals put copies of web pages on file at the Harvard Law Library, where they are stored indefinitely as a record of legal precedent. It’s also creating tools to let people interact with these archives by scrolling through historical versions of a site, or by using a custom GPT to interact with collections.

Many other groups are working on similar solutions. The US Library of Congress has suggested standards for storing video, audio, and web files so they are accessible for future generations. It urges archivists to think about issues such as whether the data includes instructions on how to access it, or how widely adopted the format has been (the idea being that a more prevalent one is less likely to become obsolete quickly).

But ultimately, digital archives are harder to keep than physical archives, says Cushman. “If you run out of budget and leave books in a quiet, dark room for 10 years, they’re happy,” he says. “If you fail to pay your AWS bill for a month, your files are gone forever.”

Storage for impossible time scales

Even the physical way we store digital data is impermanent. Most long-term storage in data centers—for use in disaster recovery, among other applications—is on magnetic hard drives or tape. Hard drives wear out after a few years. Tape is a little better, but it still doesn’t get you much beyond a decade or so of storage use before it begins to fail. 

Companies make new backups all the time, so this is less of a problem for the short-to-medium term. But when you want to store important cultural, legal, or historical information for the ages, you need to think differently. You need something that can store huge amounts of data but can also withstand the test of time and doesn’t need constant care. 
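
To put rough numbers on the “constant care” problem, here is a back-of-envelope sketch in Python. The media lifetimes are the loose figures mentioned above (a few years for hard drives, about a decade for tape), treated as illustrative assumptions rather than vendor specifications; the point is simply how many copy-forward migrations a century of storage demands.

```python
# Back-of-envelope: how many times data must be copied onto fresh media
# to survive a given horizon. The lifetimes are the loose figures from
# the text, used as illustrative assumptions, not vendor specifications.
import math

MEDIA_LIFETIME_YEARS = {
    "hard drive": 5,   # "wear out after a few years"
    "tape": 10,        # "a decade or so"
}

HORIZON_YEARS = 100

for medium, lifetime in MEDIA_LIFETIME_YEARS.items():
    migrations = max(0, math.ceil(HORIZON_YEARS / lifetime) - 1)
    print(f"{medium}: roughly {migrations} migrations over {HORIZON_YEARS} years")
```

Each of those migrations is a budget line, a staffing decision, and a chance to lose or corrupt data, which is exactly the kind of ongoing attention the approaches below try to design away.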

DNA has often been touted as a long-term storage option. It can store astonishing amounts of information and is incredibly long-lasting. Pieces of bone contain readable DNA from many hundreds of thousands of years ago. But encoding information in DNA is currently expensive and slow, and specialized equipment is required to “read” the information back later. That makes it impractical as a serious long-term backup for our world’s knowledge, at least for now.
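
For a sense of how data maps onto molecules, here is a minimal, purely illustrative sketch of the textbook two-bits-per-base scheme. Real DNA storage systems use far more elaborate encodings, with error correction and constraints that avoid sequences that are hard to synthesize or read back, so treat this as a conceptual toy only.

```python
# Toy illustration of DNA data storage: map every 2 bits to one nucleotide.
# Real systems add error correction and avoid awkward base runs; this sketch
# only shows the basic idea of turning bytes into a strand and back.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

if __name__ == "__main__":
    message = b"Hello, future"
    strand = encode(message)
    print(strand)                      # four bases per byte of input
    assert decode(strand) == message
```

Even in this toy form the appeal is visible: each byte of the message becomes just four bases, and the real bottleneck is the slow, costly chemistry of writing and reading the strand, not the coding scheme.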

""

Luckily, there are already a handful of compelling alternatives. One of the most advanced ideas is Project Silica, currently under development at Microsoft Research in Cambridge, UK, where Richard Black and his team are creating a new form of long-term storage on glass squares that can last hundreds or even thousands of years.

Each square is written with a precise, powerful laser, which creates nanoscale deformations beneath the surface of the glass that encode bits of information. These tiny imperfections are layered on top of one another inside the glass and are then read using a powerful microscope that detects how the light passing through them is refracted and polarized. Machine learning is used to decode the bits, and each square carries enough training data to let future historians retrain a model from scratch if required, says Black.

When I hold one of the Silica squares in my hand, it feels pleasingly sci-fi, as if I’ve just pulled it out to shut down HAL in 2001: A Space Odyssey. The encoded data is visible as a faint blue where the light hits the imperfections and scatters. A video shared by Microsoft shows these squares being microwaved, boiled, baked in an oven, and zapped with a high-powered magnet, all with no apparent ill effects.

Black imagines Silica being used to store long-term scientific archives, such as medical information or weather data, over decades. Crucially, the technology can create archives that can be air-gapped (cut off from the internet) and need no power or special care. They can just be locked away in a silo and should work fine and be readable centuries from now. “Humanity has never stopped building microscopes,” says Black. In 2019 Warner Bros. archived some of its back catalogue on Silica glass, including the 1978 classic Superman.

Black’s team has also designed a library storage system for Silica. Shelves packed with thousands of the glass squares line a small room at the Cambridge office. Handbag-size robots attached to the shelves whiz along them and occasionally stop, unclip themselves from one shelf, and clamber up or down to another before shooting off again down the line. When they reach a specific spot, they stop and pluck one of the squares, no bigger than a CD, from the shelf. Its contents are read and the robot zips back into position.

Meanwhile, deep in the vaults of an abandoned mine in Svalbard, Norway, GitHub is storing some of history’s most important software (including the source code for Linux, Android, and Python) on special film its creators claim can last for more than 500 years. The film, made by the firm Piql, is coated in microscopic silver halide crystals that permanently darken when exposed to light. A high-powered light source is used to create dark pixels just six micrometers across, which encode binary data. A scanner then reads the data back. Instructions for how to access the information are written in English on each roll, in case there is no longer anyone around to explain how it works. 
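
Conceptually, the film behaves like an extremely fine-grained photographic QR code: data becomes frames of dark and light dots. The sketch below illustrates only that basic idea; it is not Piql’s actual format, which layers on error correction and the human-readable instructions mentioned above.

```python
# Rough sketch of binary-on-film storage: each byte becomes eight dots in a
# grid, dark (1) or light (0). A conceptual toy, not Piql's real format,
# which adds error correction and human-readable metadata on each roll.

def bytes_to_frame(data: bytes, width: int = 64) -> list[list[int]]:
    bits = [int(b) for byte in data for b in f"{byte:08b}"]
    bits += [0] * (-len(bits) % width)           # pad out the final row
    return [bits[i:i + width] for i in range(0, len(bits), width)]

def frame_to_bytes(frame: list[list[int]], length: int) -> bytes:
    bits = "".join(str(b) for row in frame for b in row)[: length * 8]
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

if __name__ == "__main__":
    payload = b"stored on film for the long haul"
    frame = bytes_to_frame(payload)
    for row in frame[:3]:                        # a dark dot prints as '#'
        print("".join("#" if bit else "." for bit in row))
    assert frame_to_bytes(frame, len(payload)) == payload
```

The design choice worth noting is that the decoding rule is simple enough to describe in a few sentences of plain English on the roll itself, which is what makes the “no one left to explain it” scenario survivable.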

In addition to GitHub’s collection, the storage facility, known as the Arctic World Archive, also includes data supplied by the Vatican and the European Space Agency, as well as various artworks and images from governments and institutions around the world. Yale University, for example, has stored a collection of software, including Microsoft Office and Adobe, as Piql data. Just a few hundred meters down the road you find the Svalbard Global Seed Vault, a storage facility preserving a selection of the world’s biodiversity for future generations. Data about what each seed container holds is also stored on Piql film.

Making sure this information is stored in formats that can be decoded hundreds of years from now will be crucial. As Cushman points out, we still argue over the proper way to play Charlie Chaplin films because the intended playback speed was never recorded. “When researchers are trying to access these materials decades in the future, how expensive will it be to build tools to display them, and what will be the chances that we get it wrong?” he asks.

Ultimately, the motivation for all these projects is the idea that they will act as humanity’s backup. A long-term medium that will withstand an apocalypse, an electromagnetic pulse from the sun, or the end of civilization, and let us start again.

Something to let people know we were here.

Happy accidents

Sometime in the first century, a Roman woman called Claudia Severa was planning a big birthday party at a fort in northern England. She asked her servant to write out an invitation to one of her best friends on a wooden tablet and then signed it with a flourish. 

Claudia could never have suspected that, almost 2,000 years on, the Vindolanda Tablets (of which her invitation is the most famous) would be used to give us a unique insight into the daily lives of Romans in England at that time.

That’s always the way. Throughout history, the oddest, most random things have survived to act as a guide for historians. The same will go for us. Despite the efforts of archivists, librarians, and storage researchers, it’s impossible to know for sure what data will still be accessible when we’re long gone. And we might well be surprised at what future historians find interesting when they come across it. Which batch of archived emails or TikToks will be the key to unlocking our era for future historians and anthropologists? And what will they think of us?

Historians foraging through our digital detritus may be left with a series of unanswerable questions, and they’ll just have to make their best guesses.

“You’d need to ask about who had digital technology,” says Bell. “And how did they power it? And who got to make choices about it? And how was it stored and circulated? And who saw it?”

We don’t know what will still be running 20, 50, or 100 years from now. Perhaps Google Photos’ cloud storage will have been abandoned, a giant garbage pile of old hard drives buried in the ground. Or maybe, with luck, one of the spiritual heirs to Scott’s archivists will have saved it before it went down. 

Maybe someone downloaded it onto some sort of glass disc and stashed it in a vault somewhere.

Maybe some future anthropologist will one day find it, dust it off, and find that it’s still readable. 

Maybe they’ll select a file at random, spin up some sort of software emulator, and find a billion photos from 2013. 

And see a chubby, happy girl sitting in the grass.

Dan Bates, LMHC, LPC, NCC

Navigating Grief in the Digital Age

How technology is reshaping our mourning process.

Posted August 18, 2024 | Reviewed by Jessica Schrader

  • Social media creates digital memorials allowing shared grief expression and continued bonds with the deceased.
  • Virtual funerals and online support groups expand access to grief rituals and mental health resources.
  • Digital legacies raise questions about posthumous online presence and the need for digital estate planning.

In an age when our lives are increasingly intertwined with technology, it's no surprise that even our most profound human experiences, like grieving, are being reshaped, for better or worse. From social media memorials to virtual funerals, technology is changing how we mourn, remember, and honor those we've lost.

Social Media as a Platform for Grief Expression

Social media platforms have become virtual spaces for collective mourning and remembrance. Facebook pages transform into digital memorials, where friends and family share memories and photos and post messages to the deceased. This practice aligns with the psychological concept of "continuing bonds," in which the bereaved maintain a connection with the departed (Kasket, 2012). These digital spaces provide a sense of community and support, allowing grievers to express their emotions and share their loss with a wider network.

Digital Legacies and Posthumous Online Presence

As our digital footprints grow, so does the complexity of managing our online presence after death. Many social media platforms now offer options for account management after a user's passing, raising questions about digital estate planning. Brubaker et al. (2013) note that these digital remnants can be a source of both comfort and distress for the bereaved as they navigate the deceased's lingering online presence.

Virtual Funerals and Remote Participation in Grief Rituals

The COVID-19 pandemic accelerated the adoption of virtual funeral services, allowing for remote participation in grief rituals. Livestreamed services and online memorial gatherings have become increasingly common, providing opportunities for geographically dispersed mourners to come together. Additionally, online support groups and virtual grief counseling sessions have expanded access to mental health resources for those struggling with loss (Hård af Segerstad & Kasperowski, 2015).

The Double-Edged Sword of Digital Grief

While technology offers new avenues for support and remembrance, it also presents unique challenges. The constant reminders of the deceased on social media can potentially prolong the grieving process for some individuals. Privacy concerns arise as personal memories become public content, and the concept of digital immortality raises ethical questions about how long a person's online presence should persist (Myles & Millerand, 2016).

On the positive side, digital platforms provide unprecedented access to support networks and resources. They allow for the preservation of memories in rich, multimedia formats and offer new ways to honor and celebrate the lives of those we've lost.

The Future of Digital Grieving

As technology continues to evolve, so too will our grieving practices. Virtual reality (VR) may soon offer immersive experiences of visiting memorial sites or even interacting with digital avatars of the deceased. Artificial intelligence (AI) could potentially create more sophisticated digital legacies, raising complex ethical considerations about the nature of identity and memory after death (Öhman & Floridi, 2017).

Balancing Technology and Human Needs

As we navigate this new terrain of digital grief, it's crucial to remember that technology should complement, not replace, the fundamental human need for connection and support during times of loss. While digital tools can provide valuable resources and new ways to memorialize loved ones, they should be balanced with in-person support and traditional grieving practices that have served humanity for millennia.

Grief in the digital age offers both opportunities and challenges. It provides new avenues for expression, support, and remembrance, while also raising important questions about privacy, the longevity of digital legacies, and the nature of mourning itself. As we continue to integrate technology into our grieving processes, it's essential to approach these tools mindfully, using them to enhance our ability to cope with loss while still honoring the deeply personal and human experience of grief.

Brubaker, J. R., Hayes, G. R., & Dourish, P. (2013). Beyond the grave: Facebook as a site for the expansion of death and mourning. The Information Society, 29(3), 152-163. https://doi.org/10.1080/01972243.2013.777300

Hård af Segerstad, Y., & Kasperowski, D. (2015). A community for grieving: Affordances of social media for support of bereaved parents. New Review of Hypermedia and Multimedia, 21(1-2), 25-41. https://doi.org/10.1080/13614568.2014.983557

Kasket, E. (2012). Continuing bonds in the age of social networking: Facebook as a modern-day medium. Bereavement Care, 31(2), 62-69. https://doi.org/10.1080/02682621.2012.710493

Myles, D., & Millerand, F. (2016). Mourning in a 'sociotechnically' acceptable manner: A Facebook case study. In A. Hajek, C. Lohmeier, & C. Pentzold (Eds.), Memory in a Mediated World (pp. 229-243). Palgrave Macmillan.

Öhman, C., & Floridi, L. (2017). The political economy of death in the age of information: A critical approach to the digital afterlife industry. Minds and Machines, 27(4), 639-662. https://doi.org/10.1007/s11023-017-9445-2

Dan Bates, LMHC, LPC, NCC

Dan Bates, Ph.D., is a clinical mental health counselor licensed in the state of Washington and certified nationally.
