Fifty years later, IBM's inventors celebrate the 'Stretch'

Judged a failure at the time, Big Blue's 7030 supercomputer left a rich, albeit indirect legacy for the rest of the computer industry.


"All depends on your perspective," recalled an amused Fran Allen, not at all regretting her participation in a now-storied mid-1950s supercomputer project popularly known as "Stretch."

Even though IBM only built nine of the machines, Stretch left behind a legacy that remains a source of pride to the participants who were present at its creation.

"A lot of what went into that effort was later helpful to the rest of the industry," Allen said with the sort of understatement you'd expect from a former winner of the prestigious Turing Award. Fact is that Allen and the 300-some people who collaborated on Stretch invented many of the concepts that later became standard computer technologies. The short list includes multiprogramming, pipelining, memory protection, memory interleaving, and the eight-bit byte.


Many members of that original team, now grayer and slower of gait than they were during the Eisenhower administration, filled an auditorium Thursday night at the Computer History Museum to reminisce and consider the legacy they bequeathed. Fred Brooks, who was a system planner for Stretch, and Harwood Kolsky, who worked on product planning, later joined Allen on stage for a panel discussion moderated by The New York Times reporter Steve Lohr.

Work on the Stretch project formally got underway in January 1956, with the goal of building a supercomputer to replace IBM's 704. The resulting product, the 7030 (as Stretch was officially known), could perform 100 billion computations a day and handle half a million instructions per second.

"We were about 300 people working in Poughkeepsie (New York)," said Kolsky. "Individual teams met frequently. That's why it's hard to tell who invented what. Generally, morale was high. You wouldn't know it by looking up here, but it was a young person's group...there were only two people over 40. Most members of the team were in their 20s and 30s."

But they were in for a shock. IBM's then-CEO, Thomas Watson Jr., judged the 7030 to be a failure. The machine was only about 30 to 40 times faster than other systems, yet IBM had won its bid with Los Alamos Scientific Laboratory on a pledge to build a supercomputer at least 100 times faster than the 704.

When Stretch came along, IBM, which controlled about 70 percent of the computer market and about 90 percent of the punch-card business, was already fending off charges it exerted monopoly control. Brooks said that when Watson ordered the original price cut to $10 million, "that put it at under cost and violated antitrust...antitrust was a fact of everyday life in all our thinking."

In fact, IBM had signed off on a consent decree with the Justice Department in 1956. The company eventually shipped nine systems to customers around the world but then closed the production line forever.

Kolsky recalled the initial reaction to Watson's decision, saying that the project's engineers feared it could be a death knell for future supercomputer development.


Of course, it was anything but. In fact, the lessons learned from Stretch paved the way for the subsequent development of IBM's System/360, which turned out to be a smash success. Meanwhile, the innovations invented for Stretch subsequently entered the wider world of mainstream computing.

Outside of industry cognoscenti, and the relative handful of folks responsible for engineering and managing the project, Stretch remains a footnote. But maybe that's starting to change. To underscore the moment, IBM flew in one of its up-and-comers, senior VP of Development and Manufacturing Rod Adkins, to introduce the panel. Earlier in the evening, I sat down for a conversation with Adkins, who placed Stretch in its historical context.

"It's a pretty good model of a highly ambitious program that, at the time, was considered having not met its objectives," he said. "But when you look at some of the things that came out of that effort and how it's influenced the computer industry today, (Stretch) has had a profound, indirect, benefit to this industry."

Computer historians should also note the following: Stretch remained the most powerful computer in the world until 1964. Some failure.

Lessons Learned: IT’s Biggest Project Failures

Every year, the Improbable Research organization hands out Ig Nobel prizes to research projects that “first make people laugh, and then make them think.”

For example, this year’s Ig Nobel winners , announced last week, include a prize in nutrition to researchers who electronically modified the sound of a potato chip to make it appear crisper and fresher than it really is and a biology prize to researchers who determined that fleas that live on a dog jump higher than fleas that live on a cat. Last year, a team won for studying how sheets become wrinkled.

That got us thinking: Though the Ig Nobels haven’t given many awards to information technology (see No Prize for IT for reasons why), the history of information technology is littered with projects that have made people laugh — if you’re the type to find humor in other people’s expensive failures. But have they made us think? Maybe not so much. “IT projects have terrible track records. I just don’t get why people don’t learn,” says Mark Kozak-Holland, author of Titanic Lessons for IT Projects (that’s Titanic as in the ship, by the way).

When you look at the reasons for project failure, “it’s like a top 10 list that just repeats itself over and over again,” says Kozak-Holland, who is also a senior business architect and consultant with HP Services. Feature creep? Insufficient training? Overlooking essential stakeholders? They’re all on the list — time and time again.

A popular management concept these days is “failing forward” — the idea that it’s OK to fail so long as you learn from your failures. In the spirit of that motto and of the Ig Nobel awards, Computerworld presents 11 IT projects that may have “failed” — in some cases, failed spectacularly — but from which the people involved were able to draw useful lessons.

You’ll notice that many of them are government projects. That’s not necessarily because government fails more often than the private sector, but because regulations and oversight make it harder for governments to cover up their mistakes. Private enterprise, on the other hand, is a bit better at making sure fewer people know of its failures.

So here, in chronological order, are Computerworld’s favorite IT boondoggles, our own Ig Nobels. Feel free to laugh at them — but try to learn something too.

IBM’s Stretch Project

In 1956, a group of computer scientists at IBM set out to build the world’s fastest supercomputer. Five years later, they produced the IBM 7030 — a.k.a. Stretch — the company’s first transistorized supercomputer, and delivered the first unit to the Los Alamos National Laboratory in 1961. Capable of handling a half-million instructions per second, Stretch was the fastest computer in the world and would remain so through 1964.

Nevertheless, the 7030 was considered a failure. IBM’s original bid to Los Alamos was to develop a computer 100 times faster than the system it was meant to replace, and the Stretch came in only 30 to 40 times faster. Because it failed to meet its goal, IBM had to drop Stretch’s price to $7.8 million from the planned $13.5 million, which meant the system was priced below cost. The company stopped offering the 7030 for sale, and only nine were ever built.

That wasn’t the end of the story, however. “A lot of what went into that effort was later helpful to the rest of the industry,” said Turing Award winner and Stretch team member Fran Allen at a recent event marking the project’s 50th anniversary. Stretch introduced pipelining, memory protection, memory interleaving and other technologies that have shaped the development of computers as we know them.

Don’t throw the baby out with the bathwater. Even if you don’t meet your project’s main goals, you may be able to salvage something of lasting value from the wreckage.

Knight-Ridder’s Viewtron Service

The Knight-Ridder media giant was right to think that the future of home information delivery would be via computer. Unfortunately, this insight came in the early 1980s, and the computer they had in mind was an expensive dedicated terminal.

Knight-Ridder launched its Viewtron version of videotex — the in-home information-retrieval service — in Florida in 1983 and extended it to other U.S. cities by 1985. The service offered banking, shopping, news and ads delivered over a custom terminal with color graphics capabilities beyond those of the typical PC of the time. But Viewtron never took off: It was meant to be the “McDonald’s of videotex” and at the same time cater to upmarket consumers, according to a Knight-Ridder representative at the time who apparently didn’t notice the contradiction in that goal.

A Viewtron terminal cost $900 initially (the price was later dropped to $600 in an attempt to stimulate demand); by the time the company made the service available to anyone with a standard PC, videotex’s moment had passed.

Viewtron only attracted 20,000 subscribers, and by 1986, it had been canceled. But not before it cost Knight-Ridder $50 million. The New York Times business section wrote, with admirable understatement, that Viewtron “tried to offer too much to too many people who were not overly interested.”

Nevertheless, BusinessWeek concluded at the time, “Some of the nation’s largest media, technology and financial services companies … remain convinced that some day, everyday life will center on computer screens in the home.” Can you imagine?

Sometimes you can be so far ahead of the curve that you fall right off the edge.

DMV Projects — California and Washington

Two Western states spent the 1990s attempting to computerize their departments of motor vehicles, only to abandon the projects after spending millions of dollars. First was California, which in 1987 embarked on a five-year, $27 million plan to develop a system for keeping track of the state’s 31 million drivers’ licenses and 38 million vehicle registrations. But the state solicited a bid from just one company and awarded the contract to Tandem Computers. With Tandem supplying the software, the state was locked into buying Tandem hardware as well, and in 1990, it purchased six computers at a cost of $11.9 million.

That same year, however, tests showed that the new system was slower than the one it was designed to replace. The state forged ahead, but in 1994, it was finally forced to abandon what the San Francisco Chronicle described as “an unworkable system that could not be fixed without the expenditure of millions more.” In that May 1994 article, the Chronicle described it as a “failed $44 million computer project.” In an August article, it was described as a $49 million project, suggesting that the project continued to cost money even after it was shut down. A state audit later concluded that the DMV had “violated numerous contracting laws and regulations.”

Regulations are there for a reason, especially ones that keep you from doing things like placing your future in the hands of one supplier.

Meanwhile, the state of Washington was going through its own nightmare with its License Application Mitigation Project (LAMP). Begun in 1990, LAMP was supposed to cost $16 million over five years and automate the state’s vehicle registration and license renewal processes. By 1992, the projected cost had grown to $41.8 million; a year later, $51 million; by 1997, $67.5 million. Finally, it became apparent that not only was the cost of installing the system out of control, but it would also cost six times as much to run every year as the system it was replacing. Result: plug pulled, with $40 million spent for nothing.

When a project is obviously doomed to failure, get out sooner rather than later.

FoxMeyer ERP Program

In 1993, FoxMeyer Drugs was the fourth largest distributor of pharmaceuticals in the U.S., worth $5 billion. In an attempt to increase efficiency, FoxMeyer purchased an SAP system and a warehouse automation system and hired Andersen Consulting to integrate and implement the two in what was supposed to be a $35 million project. By 1996, the company was bankrupt; it was eventually sold to a competitor for a mere $80 million.

The reasons for the failure are familiar. First, FoxMeyer set up an unrealistically aggressive time line — the entire system was supposed to be implemented in 18 months. Second, the warehouse employees whose jobs were affected — more accurately, threatened — by the automated system were not supportive of the project, to say the least. After three existing warehouses were closed, the first warehouse to be automated was plagued by sabotage, with inventory damaged by workers and orders going unfilled.

Finally, the new system turned out to be less capable than the one it replaced: By 1994, the SAP system was processing only 10,000 orders a night, compared with 420,000 orders under the old mainframe. FoxMeyer also alleged that both Andersen and SAP used the automation project as a training tool for junior employees, rather than assigning their best workers to it.

In 1998, two years after filing for bankruptcy, FoxMeyer sued Andersen and SAP for $500 million each, claiming it had paid twice the estimate to get the system in a quarter of the intended sites. The suits were settled and/or dismissed in 2004.

No one plans to fail, but even so, make sure your operation can survive the failure of a project.

Apple’s Copland Operating System

It’s easy to forget these days just how desperate Apple Computer was during the 1990s. When Microsoft Windows 95 came out, it arrived with multitasking and dynamic memory allocation, neither of which was available in the existing Mac System 7. Copland was Apple’s attempt to develop a new operating system in-house; actually begun in 1994, the new OS was intended to be released as System 8 in 1996.

Copland’s development could be the poster child for feature creep. As the new OS came to dominate resource allocation within Apple, project managers began protecting their fiefdoms by pushing for their products to be incorporated into System 8. Apple did manage to get one developers’ release out in late 1996, but it was wildly unstable and did little to increase anyone’s confidence in the company.

Before another developer release could come out, Apple made the decision to cancel Copland and look outside for its new operating system; the outcome, of course, was the purchase of NeXT, which supplied the technology that became OS X.

Copland did not die in vain. Some of the technology seen in demos eventually turned up in OS X. And even before that, some Copland features wound up in System 8 and 9, including a multithreaded Finder that provided something like true preemptive multitasking.

Project creep is a killer. Keep your project’s goals focused.

Sainsbury’s Warehouse Automation

Sainsbury’s, the British supermarket giant, was determined to install an automated fulfillment system in its Waltham Point distribution center in Essex. Waltham Point was the distribution center for much of London and southeast England, and the barcode-based fulfillment system would increase efficiency and streamline operations. If it worked, that is.

Installed in 2003, the system promptly ran into what were then described as “horrendous” barcode-reading errors. Regardless, in 2005 the company claimed the system was operating as intended. Two years later, the entire project was scrapped, and Sainsbury’s wrote off £150 million in IT costs. (That’s $265,335,000 calculated by today’s exchange rate, enough to buy a lot of groceries.)

A square peg in a round hole won’t fit any better as time goes on. Put another way — problems that go unaddressed at rollout will only get worse, not better, over time.

Canada’s Gun Registration System

In June 1997, Electronic Data Systems and U.K.-based SHL Systemhouse started work on a Canadian national firearm registration system. The original plan was for a modest IT project that would cost taxpayers only $2 million — $119 million for implementation, offset by $117 million in licensing fees.

But then politics got in the way. Pressure from the gun lobby and other interest groups resulted in more than 1,000 change orders in just the first two years. The changes involved having to interface with the computer systems of more than 50 agencies, and since that integration wasn’t part of the original contract, the government had to pay for all the extra work. By 2001, the costs had ballooned to $688 million, including $300 million for support.

But that wasn’t the worst part. By 2001, the annual maintenance costs alone were running $75 million a year. A 2002 audit estimated that the program would wind up costing more than $1 billion by 2004 while generating revenue of only $140 million, giving rise to its nickname: “the billion-dollar boondoggle.”

The registry is still in operation and still a political football. Both the Canadian Police Association and the Canadian Association of Chiefs of Police have spoken in favor of it, while opponents argue that the money would be better spent otherwise.

Define your project scope and freeze specifications before the requests for changes get out of hand.

Three Current Projects in Danger

At least Canada managed to get its project up and running. Our final three projects, courtesy of the U.S. government, are still in development — they have failed in many ways already, but can still fail more. Will anyone learn anything from them? After reading these other stories, we know how we’d bet.

FBI Virtual Case File

In 2000, the FBI finally decided to get serious about automating its case management and forms processing, and in September of that year, Congress approved $379.8 million for the Information Technology Upgrade Project. What started as an attempt to upgrade the existing Automated Case Support system became, in 2001, a project to develop an entirely new system, the Virtual Case File (VCF), with a contract awarded to Science Applications International Corp.

That sounds reasonable until you read about the development time allotted (a mere 22 months), the rollout plans (a “flash cutover,” in which the new system would come online and the old one would go offline over a single weekend), and the system requirements (an 800-page document specifying details down to the layout of each page).

By late 2002, the FBI needed another $123.2 million for the project. And change requests started to take a toll: According to SAIC, those totaled about 400 by the end of 2003. In April 2005, SAIC delivered 700,000 lines of code that the FBI considered so bug-ridden and useless that the agency decided to scrap the entire VCF project. A later audit blamed factors such as poorly defined design requirements, an overly ambitious schedule and the lack of an overall plan for purchases and deployment.

The FBI did use some of what it learned from the VCF disaster in its current Sentinel project. Sentinel, now scheduled for completion in 2012, should do what VCF was supposed to do using off-the-shelf, Web-based software.

Homeland Security's Virtual Fence

The U.S. Department of Homeland Security is bolstering the U.S. Border Patrol with a network of radar, satellites, sensors and communication links — what’s commonly referred to as a “virtual fence.” In September 2006, a contract for this Secure Border Initiative Network (SBInet, not to be confused with Skynet) was awarded to Boeing, which was given $20 million to construct a 28-mile pilot section along the Arizona-Mexico border.

But early this year, Congress learned that the pilot project was being delayed because users had been excluded from the process and the complexity of the project had been underestimated. (Sound familiar?) In February 2008, the Government Accountability Office reported that the radar meant to detect aliens coming across the border could be set off by rain and other weather, and the cameras meant to zoom in on subjects sent back images of uselessly low resolution for objects beyond 3.1 miles. Also, the pilot’s communications system interfered with local residents’ WiFi networks — not good PR.

In April, DHS announced that the surveillance towers of the pilot fence did not meet the Border Patrol’s goals and were being replaced — a story picked up by the Associated Press and widely reported in the mainstream media. But the story behind the story is less clear. The DHS and Boeing maintain the original towers were only temporary installations for demonstration purposes. Even so, the project is already experiencing delays and cost overruns, and in April, SBInet program manager Kirk Evans resigned, citing lack of a system design as just one specific concern. Not an auspicious beginning.

Census Bureau's Handheld Units

Back in 2006, the U.S. Census Bureau made a plan to use 500,000 handheld devices — purchased from Harris Corp. under a $600 million contract — to help automate the 2010 census. Now, though, the cost has more than doubled, and their use is going to be curtailed in 2010 — but the Census Bureau is moving ahead with the project anyway.

During a rehearsal for the census conducted in the fall of 2007, according to the GAO, field staff found that the handheld devices froze or failed to retrieve mapping coordinates (see Hard questions needed to save projects for details). Furthermore, multiple devices had the same identification number, which meant they would overwrite one another’s data.

After the rehearsal, a representative of Mitre Corp., which advises the bureau on IT matters, brought notes to a meeting with the bureau’s representative that read, “It is not clear that the system will meet Census’ operational needs and quality goals. The final cost is unpredictable. Immediate, significant changes are required to rescue the program. However, the risks are so large considering the available time that we recommend immediate development of contingency plans to revert to paper operations.”

There you have it, a true list of IT Ig Nobels: handheld computers that don’t work as well as pencil and paper, new systems that are slower and less capable than the old ones they’re meant to replace. Perhaps the overarching lesson is one that project managers should have learned at their mothers’ knees: Don’t bite off more than you can chew.

No Prize for IT

Information technology has rarely won an Ig Nobel award in the 18 years the prizes have been doled out by the Improbable Research organization.

Should we take the snub personally?

Marc Abrahams, the editor of Improbable Research, the organization’s blog, says he thinks IT’s relative absence is simply because the field is younger than other disciplines. “Certainly IT offers the same level of absurdity as other areas of research,” he says comfortingly.

He points out that Murphy’s Law, whose three “inventors” (John Paul Stapp, Edward A. Murphy, Jr. and George Nichols) were honored with an Ig Nobel in 2003, sprang from an IT-like project in the late 1940s. Murphy was an electrical engineer who was brought in to help the Air Force figure out why safety tests they were conducting weren’t producing any results. Murphy discovered that the electronic monitoring systems had been installed “backwards and upside down,” according to Abrahams, a discovery that caused him to mutter the first version of the law that bears his name.

Other Ig Nobels drawn from the world of technology include:

2001: John Keogh of Hawthorn, Victoria, Australia, won in the Technology category for patenting the wheel; he shared the award with the Australian Patent Office, which granted him Innovation Patent #2001100012 (pdf) for a “circular transportation facilitation device.”

2000: Chris Niswander of Tucson, Ariz., won a Computer Science Ig Nobel for his development of PawSense, software that can tell when a cat is walking across your keyboard and make a sound to scare it off.

1997: Sanford Wallace — yes, that Sanford Wallace — of Cyber Promotions took the Communications Ig Nobel for being the Spam King.

The Ig Nobels, it must be remembered, aren’t into value judgments.

San Francisco-based Widman is a frequent contributor to Computerworld.

IBM STRETCH console

Portions of the BYU STRETCH, as well as the entire STRETCH from Lawrence Livermore Labs, are in the permanent collection of The Computer History Museum.


IBM retires 7030 “STRETCH” computer, June 21, 1981


The 7030 premiered in 1961, performing at speeds approximately 30 times faster than the most advanced computer in the world at that time. Unfortunately, even with such advances, the 7030 did not meet promises made by IBM.

Big Blue aggressively estimated speeds “at least 100 times greater than that of existing machines” in its proposal for the government funding that allowed the 7030 to be built. It was, indeed, going to be a “stretch” of IBM’s collective intellectual capacity, and thus the machine’s nickname was born.

With speeds lower than anticipated, IBM brought down the price on the 7030 from $13.5 million to $7.8 million. At the lower price, IBM was losing money on each build, and only eight units were sold. Although the 7030 failed to meet its goals, it did remain the world’s fastest computer until the first CDC 6600 became operational in 1964.

The 7030 introduced many technologies that were incorporated in later, highly successful machines, including the 8-bit character called a “byte,” and some of them are still used in current high-performance systems.

Indeed, reports state that Stephen Dunwell, the project manager who faced blame when STRETCH failed commercially, pointed out soon after the successful 1964 launch of IBM’s System/360 that most of its core concepts were pioneered by the 7030. Based on the 7030’s groundwork, as well as other accomplishments, Dunwell was named an IBM Fellow. He provides an oral history of the project here.

You can visit this IBM page to read the original press fact sheet on the 7030.

Related articles:

  • IBM dedicates Harvard Mark I, August 7, 1944
  • IBM intros 1st computer disk storage unit, September 13, 1956
  • Slideshow: The top 5 fastest supercomputers and their power management challenges
  • Cray-1 supercomputer: The power supply

For more moments in tech history, see this blog. EDN strives to be historically accurate with these postings. Should you see an error, please notify us.

Editor’s note: This article was originally posted on June 21, 2012, and edited on June 21, 2019.

Tom Chatfield

The main reason for IT failures.

My latest BBC column, republished here for UK readers, looks at some of the dispiritingly enduring human reasons behind IT project failures.

The UK’s National Health Service may seem like a parochial subject for this column. But with 1.7 million employees and a budget of over £100 billion, it is the world’s fifth biggest employer – beaten only by McDonald’s, Walmart, the Chinese Army, and the US Department of Defence. And this means its successes and failures tend to provide salutary lessons for institutions of all sizes.

Take the recent revelation that an abandoned attempt to upgrade its computer systems will cost over £9.8 billion – described by the Public Accounts Committee as one of the “worst and most expensive contracting fiascos” in the history of the public sector.

This won’t come as a surprise to anyone who has worked on large computing projects. Indeed, there’s something alarmingly monotonous to most litanies of tech project failure. Planning tends to be inadequate, with projected timings and budgets reflecting wishful thinking rather than a robust analysis of requirements. Communication breaks down, with side issues dominating discussions to the exclusion of core functions. And the world itself moves on, turning yesterday’s technical marvel into tomorrow’s white elephant, complete with endless administrative headaches and little scope for technical development.

Statistically, there are few fields more prone to extravagant failure. According to a 2011 study of 1,471 ICT projects by Alexander Budzier and Bent Flyvbjerg of Oxford’s Said Business School, one in every six ICT projects costs at least three times as much as initially estimated: around twenty times the rate at which projects in fields like construction go this wrong.

But if costly IT failures are a grimly unsurprising part of 21st-Century life, what’s revealing is not so much what went wrong this time as why the same mistakes continue to be repeated. Similar factors were, for example, in evidence during one of the first and most famous project management failures in computing history: the IBM 7030 Stretch supercomputer.

Begun in 1956, IBM’s goal was to build a machine at least one hundred times more powerful than its previous system, the IBM 704. This target won a prestigious contract with the Los Alamos National Laboratory – and, in 1960, the machine’s price was set at $13.5 million, with negotiation beginning for other orders.

The only problem was that, when a working version was actually tested in 1961, it turned out to be just 30 times faster than its predecessor. Despite containing a number of innovations that would prove instrumental in the future of computing, the 7030 had dismally failed to meet its target – and IBM had failed to realise what was going on until too late. The company’s CEO announced that the price of the nine systems already ordered would be cut by almost $6 million each – below cost price – and that no further machines would be made or sold. Cheaper, nimbler competitors stepped into the gap.

Are organisations prone to a peculiar blindness around all things digital? Is there something special about information technology that invites unrealistic expectations?

I would suggest that there is – and that one reason is the disjunction between problems as a business sees them, and problems seen in terms of computer systems. Consider the health service. The idea of moving towards an entirely electronic system of patient records makes excellent sense – but bridging the gap between this pristine goal and the varied, interlocking ways in which 1.7 million employees currently work is a fiendish challenge. IBM faced a far simpler proposition, on paper: make a machine one hundred times faster than their previous best. But the transition from paper to reality entailed difficulties that didn’t even exist until new components had been built, complete with new dead ends and frustrations.

All projects face such challenges. With digital systems, though, the frame of reference is not so much the real world as an abstracted vision of what may be possible. The sky is the limit – and big talk has a good chance of winning contracts. Yet there’s an inherent divide between the real-world complexities of any situation and what’s required to get these onscreen. Computers rely on models, systems and simplifications which we have built in order to render ourselves comprehensible to them. And the great risk is that we simply don’t understand ourselves, or our situation, well enough to explain it to them.

We may think we do, of course, and propose astounding solutions to complex problems – only to discover that what we’ve “solved” looks very little like what we wanted or needed. In the case of almost every sufficiently large computing project, in fact, the very notion of solving a small number of enormous problems is an almost certain recipe for disaster, given that beneath such grandeur lurk countless conflicting requirements just waiting to be discovered.

If there is hope, it lies not in endlessly anatomizing those failures we seem fated to repeat, but in better understanding the fallibilities that push us towards them. And this means acknowledging that people often act like idiots when asked to explain themselves in terms machines can understand.

You might call it artificial stupidity: the tendency to scrawl our hopes and biases across a digital canvas without pausing to ask what reality itself will support. We, not our machines, are the problem – and any solution begins with embracing this.

Such modesty is a tough sell, especially when it’s up against polished solutionism and obfuscation – both staples of debate between managers and technicians since well before the digital era. The alternative, though, doesn’t bear thinking about: an eternity of over-promising and under-delivering. Not to mention wondering why the most powerful tools we’ve ever built only seem to offer more opportunities for looking stupid.


12 Notorious Failed Projects & What We Can Learn from Them

ProjectManager

Failure is an unavoidable part of any project process: it’s the degree of failure that makes the difference. If a task fails, there are ways to reallocate resources and get back on track. But a systemic collapse will derail the whole project.

Why Is It Important to Analyze Failed Projects?

What good can come from failure? A lot, actually. Sometimes a project reaches too far beyond its means and fails, which is unfortunate but can also serve as a teaching moment. If project managers don’t learn from their mistakes, then they’re not growing professionally and will revisit the same problem in future projects.

Project managers can learn as much, if not more, from failed projects as they can from successful ones. A post-mortem analysis should be part of any project plan, and especially so when a project crashes and burns. There are valuable lessons in those ashes.

One lesson is that project management software decreases the chance of a failed project. ProjectManager is award-winning project management software that allows you to monitor your work in real time to make more insightful decisions that can keep failure at bay. Use our real-time dashboards to track the health of your project, including such important key performance indicators (KPIs) as time, cost and more. There’s no time-consuming setup required, as there is with lightweight software. Our dashboard is ready when you are. Get started with ProjectManager today for free.

ProjectManager's real-time dashboard helps you avoid project failure

12 Top Failed Projects from History

Let’s look at the most notorious failed projects, not to gloat, but to see what they can tell us about project management .

1. Sony Betamax

The word Betamax has become almost synonymous with failure. But when it was first released, Betamax was supposed to become the leader in the cassette recording industry. Developed by Sony, Betamax was introduced in the mid-1970s but was unable to gain traction in a market that JVC’s VHS technology soon came to dominate.

Surprisingly, Sony continued to produce Betamax all the way into 2016. Long before it discontinued the technology, Betamax was already irrelevant.

Betamax was an innovative product, and it even got to market before VHS. But soon the market had options that were cheaper and better than Betamax, making it a failed project. Sony’s mistake was thinking that the project was complete once the product went to market . Project managers need to always follow up on their work, analyze the data and make an evaluation about what needs to be done to keep the project relevant.

2. New Coke

Coca-Cola is one of the most iconic brands in the world. It’d take a lot to tarnish that reputation. But that’s just what happened when New Coke was introduced in 1985. People didn’t know why the Coke they loved and drank regularly was being replaced.

The company knew why. They were looking to improve quality and make a splash in the marketplace. The fact is, New Coke sank like a stone. It wasn't as though New Coke was released without any market research, though it might seem that way. In fact, the new recipe was tested on 200,000 people, who preferred it to the older version.

But after spending $4 million in development and losing another $30 million in backstocked products, the taste for New Coke evaporated. Consumers can be very loyal to a product, and once they get into a habit, it can be very difficult to break them of it in favor of something different.

It’s not that Coca-Cola neglected market research to see if there was a need to develop a new product, but they were blind to their own customers’ motivations. New Coke was a failed project because the researchers needed to do more than a mere taste test.

They needed to understand how people would react when the familiar Coke they loved would be discontinued and replaced by a shiny new upstart. Market research must be handled like a science and an art—and worked into the project plan accordingly.

3. Crystal Pepsi

In 1992, Pepsi launched Crystal Pepsi. It was a unique soft drink in that it had no color; it was as clear as water. Pepsi hoped to take advantage of the growing trend toward purity and health, marketing the new drink as pure, caffeine-free and an alternative to unhealthy traditional colas.

At first, sales looked good. The first year saw about $470 million in sales, and consumers were curious to find out whether the taste was the same as Pepsi, which it was. Other colorless soft drinks, such as 7Up and Sprite, were already on the market. But what Pepsi didn't take into account was how much sight influences flavor. Consumers found the product bland and sales tanked.

Crystal Pepsi was mocked on Saturday Night Live, and Time magazine listed it among the top 10 marketing failures of the 20th century.

Pepsi made the mistake of ignoring all the senses involved in the consumption of its product. It should have done more testing. If it had, it would have realized how much the look of the product mattered. Pepsi assumed that a clear liquid would signal a healthy one, but what most consumers registered was a bland one.

4. Ford Edsel

Ford released its Edsel model in 1957. Since then, the name has become synonymous with project planning failure. That’s an accomplishment, but not the type that Ford was hoping for. This was supposed to be the car for the middle class and Ford invested $250 million into the Edsel.

Ford ended up losing $350 million on the gas-guzzler that the public found an unattractive alternative to other cars on the market. Part of the problem was that the first Edsels had oil leaks, hoods that stuck, trunks that wouldn’t open and more issues that soured consumer confidence in the product.

The Edsel was a lesson in executive egos overriding what the research was telling the company. Ford conducted many polls to find out what Americans wanted in a car, including what to name it, but executives went with Edsel anyway. The car's design didn't reflect the polling either.

If you’re going to do polling on what the public wants, it is a poor decision to ignore that data . So much time and effort went into coming up with the name, even hiring modernist poet Marianne Moore (who came up with nothing marketable), that Ford neglected to determine if there was even a market for this new car.

5. Airbus A380

The Airbus A380 was viewed as a way for Airbus to outdo Boeing's 747. The company spent more than $30 billion on product development in the belief that the industry would embrace a bigger plane that could hold more passengers and increase revenue.

In fact, the A380 sold well short of its predicted 1,200 units. The plane was headed for the scrap heap as it faced obstacles such as airports having to build special infrastructure and gates to accommodate such a massive aircraft, costs that would be passed back to the airlines. That was bound to sour the deal, and it did.

Then there were the technical issues. Qantas had to ground its entire A380 fleet after an engine blew up. You’d think that engineers would have thought beyond having more passengers seated on a bigger plane. But they didn’t.

The biggest lesson is that just because you build it doesn't mean anyone is going to want it. The demand Airbus believed in simply wasn't there. Industries and markets are fickle. Just because airlines say they want something today doesn't mean they'll want it tomorrow. Airbus should have hedged its bets.

6. World Athletics Championships 2019

Doha is the capital of Qatar and the site of the World Athletics Championships in 2019. The world’s best athletes went there to compete against one another, but the big event turned out to be an even bigger dud.

The problem was that the host nation was unable to sell most of the tickets to the event. Some of the greatest athletes in the world were forced to compete in stadiums that were nearly empty. It was a failure and an embarrassment.

Money is needed to plan for an event , but that investment is no guarantee that people will show up. The mistake was thinking there was a large enough fanbase to sell all the tickets. We keep coming back to this, but it deserves to be mentioned again: research is critical. It wouldn’t have taken much to determine if there were enough interested people to bring a return on the investment.

7. Garden Bridge

Vanity projects tend not to care about success or failure. They’re driven by ego and such was the case with the Garden Bridge. It was the brainchild of Boris Johnson when he was Mayor of London.

This construction project cost 53 million pounds, which is a lot of money, especially when considering it was never even built. The idea of a bridge made of gardens for city dwellers to enjoy is fine, but the over-optimistic fundraising targets and the ballooning costs led to its spectacular failure.

Projects must be realistic. It's good to remember SMART goals, an acronym for specific, measurable, achievable, relevant and time-bound. Had the project followed those constraints, it might have been built, or abandoned before all that money was spent.
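As a purely illustrative sketch (the field names below are our own invention, not part of any particular tool), a SMART check can be reduced to a short checklist that flags which criteria a goal fails to meet:

```python
# Minimal sketch of a SMART-goal checklist; field names are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProjectGoal:
    description: str                # Specific: what exactly will be delivered
    success_metric: Optional[str]   # Measurable: how success will be judged
    funded: bool                    # Achievable: realistic with secured funding
    business_case: Optional[str]    # Relevant: why the organization needs it
    deadline: Optional[str]         # Time-bound: when it must be done

def smart_gaps(goal: ProjectGoal) -> list:
    """Return the SMART criteria this goal fails to meet."""
    checks = {
        "Specific": bool(goal.description.strip()),
        "Measurable": goal.success_metric is not None,
        "Achievable": goal.funded,
        "Relevant": goal.business_case is not None,
        "Time-bound": goal.deadline is not None,
    }
    return [name for name, ok in checks.items() if not ok]

bridge = ProjectGoal("Garden bridge over the Thames", success_metric=None,
                     funded=False, business_case=None, deadline=None)
print(smart_gaps(bridge))  # ['Measurable', 'Achievable', 'Relevant', 'Time-bound']
```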

8. Apple Lisa

Before Apple became synonymous with the personal computer (and long before popular products such as the iPhone), it released the Lisa. It cost $10,000, with a 5 MHz processor and 1 MB of RAM. The first model sold only 10,000 units.

Lisa was fated to fail because it was really a prototype. It was marketed in 1983 as a game-changing successor to Apple's popular but command-line-based Apple II. The price is certainly one reason it was not a realistic personal computer, but there were technical issues too. Its operating system could run multiple programs at once, which was more than its processor could handle, so Lisa ran sluggishly.

The truth is Lisa was less a failure than an expensive lesson. Lisa led to the Macintosh, which was basically a less expensive and more effective version of Lisa. The lesson here is that one can learn from failure if it doesn’t bankrupt the company, that is.

9. Dyson Electric Car

After four years and millions of dollars, James Dyson canceled his electric car project. It took that long to realize it wasn’t commercially viable. There is certainly a growing market for electric cars as the industry is motivated by consumers and government regulations to move from fossil fuels to more energy-efficient and sustainable alternatives.

There’s a boom in the production of electric cars, from major manufacturers such as Chrysler and Ford to startups such as Tesla. But sometimes the time isn’t right and no matter how good the idea is, it’s just not meant to be.

Timing is everything. But it’s also important to note how difficult it is to penetrate a market with established players. It takes a lot of capital and manufacturing expertise to start a car company and be competitive.

Related: 10 Free Manufacturing Excel Templates

10. Stretch Project

The Stretch project was initiated in 1956 by a group of computer scientists at IBM who wanted to build the world’s fastest supercomputer. The result of this five-year project was the IBM 7030, also known as Stretch. It was the company’s first transistorized supercomputer.

Though Stretch could handle a half-million instructions per second and was the fastest computer in the world up to 1964, the project was deemed a failure. Why? The project’s goal was to create a computer 100 times faster than what it was built to replace. Stretch was only about 30-40 times faster.

The planned price was $13.5 million, but it was cut to $7.8 million, below what the machine cost to build, so IBM sold it at a loss. Only nine of the supercomputers were built.
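For the record, the shortfall is easy to put in numbers (a back-of-the-envelope calculation using the figures reported above):

```python
# Back-of-the-envelope numbers for the Stretch shortfall, using the figures above.
goal_speedup     = 100      # promised: 100 times the IBM 704
achieved_speedup = 35       # delivered: roughly 30-40 times; take the midpoint
planned_price    = 13.5e6   # dollars
reduced_price    = 7.8e6    # dollars, after the price cut

print(f"Share of performance target met: {achieved_speedup / goal_speedup:.0%}")  # ~35%
print(f"Price reduction: {1 - reduced_price / planned_price:.0%}")                # ~42%
```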

While the project was a failure in that it never achieved the goal it set, there was much IBM could salvage from the project. Stretch introduced pipelining, memory protection, memory interleaving and other technologies that helped with the development of future computers.

Creative work is rooted in failure precisely because of the serendipitous discoveries it produces. This was a creative project that might not have met its stated objective but generated a slew of useful technologies. So aim for your goal, and who knows what good things you'll discover along the way.


11. Challenger Space Shuttle

The worst failure is one that results in the loss of life. When you're dealing with highly complex and dangerous projects like NASA's, there is always tremendous risk that needs to be tracked. On January 28, 1986, that risk became a horrible reality as the space shuttle Challenger exploded 73 seconds after launch.

The cause was a leak in one of the two solid rocket boosters that ignited the main liquid fuel tank. The NASA investigation that followed attributed the failure to a faulty O-ring seal design and the cold weather at launch, which allowed the leak to occur.

But the investigation uncovered not only a technical error but a human one. NASA officials went ahead with the launch even though engineers had raised concerns about the safety of the mission. The engineers flagged the O-ring risk, but their warnings never traveled up to managers who could have delayed the launch to protect the mission and its astronauts.

Managers are only as well-informed as their team. If they’re not opening lines of communication to access the data on the frontlines of a project, mistakes will be made, and in this case, fatal ones.

12. Computerized DMV

No one loves the DMV. If it were a brand, its reputation would be more than tarnished; it'd be buried. But everyone who drives a vehicle is going to have some interaction with this government agency. Unfortunately, the DMV didn't help its case in the 1990s, when the states of California and Washington attempted to computerize their Departments of Motor Vehicles.

In California, the project began in 1987 as a five-year, $27 million plan to track the state's 31 million drivers' licenses and 38 million vehicle registrations. Problems started at the outset, when the state solicited only one bid for the contract, from Tandem Computers, locking itself into buying that company's hardware.

Then, to make things worse, tests showed that the new computers were even slower than the ones they were to replace. But the state moved forward with the project until 1994 when it had to admit failure and end the project. The San Francisco Chronicle reported that the project cost the state $49 million, and a state audit found that the DMV violated contracting laws and regulations.

The problem here was a project that didn't follow regulations. All projects must go through due diligence, and legal and regulatory constraints must be part of that process. If the state had done that, and if the bidding process had invited more than one firm to the table, a costly mess could have been avoided, and our wait at the DMV might actually have gotten shorter.

How ProjectManager Prevents Failed Projects

ProjectManager keeps your projects from failing with a suite of project management tools that shepherd your project from initiation to a successful close. Plan, schedule and track work, while managing teams, with our online software.

Plan Every Last Detail

Successful projects begin with a strong plan. But it can be hard to keep all those tasks and due dates working together on a realistic schedule. What if some tasks are dependent? It gets complicated. But ProjectManager has an online Gantt chart that plots your tasks across a project timeline, linking dependencies and breaking projects into digestible milestones.
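To make the dependency-linking idea concrete, here is a minimal sketch (our own illustration, not ProjectManager's implementation; the tasks and durations are invented) of how earliest start and finish dates fall out of task durations and predecessors, which is essentially what a Gantt chart draws:

```python
# Minimal sketch of dependency-aware scheduling, the calculation behind a Gantt chart.
# Assumes the dependency graph is acyclic; task names and durations are invented.

tasks = {
    # name: (duration_in_days, [predecessor names])
    "design":   (5,  []),
    "build":    (10, ["design"]),
    "test":     (4,  ["build"]),
    "document": (3,  ["design"]),
    "release":  (1,  ["test", "document"]),
}

def earliest_finish(name, memo=None):
    """Earliest day a task can finish, counting day 0 as project start."""
    memo = {} if memo is None else memo
    if name not in memo:
        duration, deps = tasks[name]
        start = max((earliest_finish(d, memo) for d in deps), default=0)
        memo[name] = start + duration
    return memo[name]

schedule = {}
for name in tasks:
    duration, _ = tasks[name]
    finish = earliest_finish(name, schedule)
    print(f"{name:9s} starts day {finish - duration:2d}, finishes day {finish:2d}")
```

A real scheduler would add calendars, resource constraints and critical-path analysis on top, but the core is the same: finish dates propagate through the dependency links.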


Track Progress as It Happens

ProjectManager keeps you on track with high-level monitoring via its real-time dashboard and more detailed data with one-click reporting . Now when projects start to veer off-track, you can get them back on course quickly.


Though we didn't include such an example above, many projects fail simply because they're not equipped with the right tools for the job. ProjectManager is online project management software that gives project managers and their teams everything they need to plan, monitor and report on their projects. Don't let your next project fail; try ProjectManager with this free 30-day trial.


IBM's Fall From World Dominance

Tech historian James Cortada has charted the company's many highs and lows—and thinks it's still a contender.

Steven Cherry Hi, this is Steven Cherry for IEEE Spectrum’s podcast, Fixing the Future.

IBM is a remarkable company, known for many things—the tabulating machines that calculated the 1890 U.S. Census, the mainframe computer, legitimizing the personal computer, and developing the software that beat the best in the world at chess and then Jeopardy.

The company is, though, even more remarkable for the businesses it departed—often while they were still highly profitable—and for the new ones it pivoted to before their profitability was obvious or assured.

The pivot people are most familiar with is the one into the PC market in the 1980s and then out of it in the 2000s. In fact, August 2021 marks the 40th anniversary of the introduction of the IBM PC. Joining me to talk about it—and IBM's other pivots, past and future—is a person uniquely qualified to do so.

James Cortada is both a Ph.D. historian and a 38-year veteran of IBM. He’s currently a senior research fellow at the University of Minnesota’s Charles Babbage Institute , where he specializes in the history of technology. He was therefore perfectly positioned to be the author of the definitive corporate history of the company he used to work for, in a book entitled IBM: The Rise and Fall and Reinvention of a Global Icon , which was published in 2019 by MIT Press.

Cortada is also a contributor to IEEE Spectrum , most recently of an article this month entitled “ How the IBM PC Won, Then Lost, the Personal Computer Market ,” and in that sense I’m delighted to call him a colleague. He joins us by Skype.

Jim, welcome to the podcast.

James Cortada Delighted to be here.

Steven Cherry Jim, IBM wasn't the first to personal computers. The first Apple computer was in 1976 and by 1981 the Apple II was firmly leading the market. Commodore, Tandy/RadioShack, and Osborne also had popular computers. More importantly, there was already an operating system, Digital Research's CP/M, that anchored the market, and quite a bit of software was available for every computer that could run it: WordStar, VisiCalc, BASIC.... There were C and Pascal compilers. There were assemblers.

Because IBM was late to the PC market, it did two things that turned out to contribute mightily to its success. [The PC] was developed as a kind of skunkworks project that reported directly to the CEO of the company. And contrary to its corporate culture, it used off-the-shelf parts and software that the company didn’t write. Just how revolutionary was that for IBM?

James Cortada I cannot think of another time before then when IBM had done that. Prior to that time, they either bought a company that had something, a part or software or technology, or invented it themselves in their own research laboratories, which were always attached to company manufacturing facilities so they could make it manufacturable. So this is a complete departure. The reason it was done is that the IBM process for developing new equipment would take too long to get a PC out into the marketplace, and they needed to move quickly once that decision had been made and they could not do it with the existing process. So they needed a skunkworks. And that's what Frank Cary, the chairman of the board, who ran the company, decided to do.

Steven Cherry Jim, those two factors—the skunkworks aspect and the off-the-shelf construction—also led to the downfall of IBM and the PC market. Eventually, the PC business got folded into the regular chain of command and business structures. And by using Microsoft’s operating system and Intel’s chips, without exclusive rights to them, the PC market came to be controlled by those two companies and it became a commodity business.

James Cortada It became a commodity business not only because of the chips and the operating system, but because other companies were able to put it all together at a lower cost than IBM. Once the PC business in IBM got folded into the main corporate structure, its operating costs went up, so it was nearly impossible to get the cost of manufacturing and sales down to a competitive level. And the marketplace also began to compete based on price, because everybody had good machines.

Steven Cherry Selling businesses off when they became commodities is part of a pattern. It happened as well in 2002, when IBM sold its disk drive business to Hitachi. At the time, this one unit was contributing something like a third of the company's annual profits.

James Cortada The interesting thing about DASD [direct-access storage device] was IBM invented the disk drive in the mid-1950s and kept innovating that technology so fast that its product costs and what it could sell for remained very competitive for a very long time. But eventually, like everything else, it became a commodity, especially when the cost of computer chips dropped to almost nothing. And so you could have a vast quantity of storage at minimal cost. Just look at your cell phone. So IBM decided that it's better off with high-profit items and not as well off with low-profit items, even if it was still making a profit. So they decided to get out of that business and take the money they would have otherwise spent on it and put it into more profitable activities.

Steven Cherry US $2.6 billion from Lenovo for the PC business, $2 billion from Hitachi, with some downstream money as well. This is in sharp contrast to, say, Kodak, which, when it finally sold off its film business in 2013, did so as part of a bankruptcy reorganization. Similarly, GE sold off GE Capital for $26 billion after the 2008 finance and banking collapse, which is a far cry from a decade earlier, when it was worth ten times that.

James Cortada Timing is everything. What I can say about the PC and the DASD was the fact that they didn’t milk it for the very last dollar when they saw the handwriting on the wall. They knew from prior experience that you sell off that piece of the business before it’s not worth anything. And sometimes you have less than six months or a year in this industry to do that. But IBM sold these businesses off before it was too late, and that’s why it was able to gain a nice return.

The other thing that everybody overlooks, particularly with the PC business, is that it was a beautiful negotiation because it allowed IBM to enter the Chinese market in a way that China would have liked, through an existing local company that was already trusted, Lenovo, and that knew how to get around and do stuff in China. So in addition to the cash transactions and the transfer of people and intellectual capital, IBM gained access to a huge market.

Steven Cherry We’re speaking with historian Jim Cortada. When we come back, I’ll ask him to walk us through some of IBM’s most difficult moments, and to speculate about its uncertain future.

Fixing the Future is supported by COMSOL , the makers of COMSOL Multiphysics simulation software . Companies like the Manufacturing Technology Centre are revolutionizing the designs of additive manufactured parts by first building simulation apps from COMSOL models, allowing them to share their analyses with different teams and explore new manufacturing opportunities with their own customers. Learn more about simulation apps and find this and other case studies at comsol.com/blog/apps .

We’re back with my guest Jim Cortada, a senior research fellow at the University of Minnesota’s Charles Babbage Institute and author of a comprehensive corporate history of IBM.

Jim, I mentioned some of IBM’s big pivots—from tabulators to computers, from mainframes to PCs and servers, from hardware to services and consulting. In each case, the future of the entire company was at stake.

James Cortada That's absolutely correct. When you move, in a technology company, from one platform to another, from one business model to another, it's very risky. Some people can do it well, others can't. In IBM's case, for example, when it got out of the tabulating business in the 1950s, it had been in that business for half a century. And it owned it. Yet computers were clearly going to be displacing tabulating equipment. So IBM had to get into the computer business; it had to learn the technology. It had spent 10 years prior to that learning about the technology and participating in preliminary projects.

So when it started the transition to computers, it already knew a great deal about the subject. It was a question of timing, when to enter, how fast, what kind of configurations of equipment, and all the basic blocking and tackling. It did that when it got into the services business in the 1980s and 1990s. Again, a very similar thing. You go from trying to sell machines and software to selling, hawking our brains, if you will, at X number of dollars per hour of consulting, yet at the same time holding on to hardware and software sales as desirable. That, again, was a fundamental structural difference. But it had a decade of experience experimenting and learning. And even then it took, in each case, a decade to make the move.

Steven Cherry People don’t realize how risky these transitions are. Microsoft , for example, was late to the Internet and the Web and it almost killed the company. And then instead of learning from that experience, they were even later to the transition to mobile platforms, to cell phones and tablets.

James Cortada That’s correct. And all these companies periodically take a few years to learn how to do it. Well, first, they have to learn that they have to do it and accept it, because there are a lot of food fights within the company about whether we should go or not go. They all go through this. Then they have to learn how to do it and then they’ve got to go do it. And then convince everybody they did it. That’s Microsoft, that’s IBM. That’s all of them. Kodak failed.

Steven Cherry Jim, you were at IBM for one of these major transitions, which you describe as a corporate near-death experience. What was it like within the company to live and work through such a tumultuous period?

James Cortada Hah, you didn't know, for example, whether you were going to get laid off. You didn't know how to develop your career … should you continue along the traditional line you had been in or start another? And it was another … like in consulting—and I jumped into consulting; I bet that consulting was going to grow. You had to learn a whole new profession.

So a lot of the things that you knew before did not necessarily play out. There was a lot of angst in the company about how do we do this, how do we take care of our customers, but also how do we take care of our profits and our revenue streams? Very delicate, very difficult to do. A lot of new people were brought in who did not understand IBM's culture, and they had to learn how to deal with IBM. But at the same time, we had to figure out how to work with those folks. So they came from PwC, Arthur Andersen, on and on and on—all the majors. And that was very difficult to do. A lot of people didn't make it.

Steven Cherry You were fortunate enough to spend some hours with Thomas Watson Jr. and talk with him about the initial transition from tabulators to computers. And of course, he wrote about that himself. How would you compare these two transitions—into computers on the one hand and away from computer hardware on the other?

James Cortada I would say the transition from tabulators to computers was harder, more radical. It basically required an entirely new set of technology. It required a whole new set of employees and a different business model, because the revenue streams, the profit streams and so on were fundamentally different. The only thing that didn't change was the culture and the values of the company, because they applied in both cases. In the case of the consulting business, the services business, IBM kept holding on to hardware and software and added consulting.

Steven Cherry IBM seemed like it was making another pivot with artificial intelligence. After winning at chess and at Jeopardy, it created a new division, Watson, and gave it enormous resources, especially in personnel and in marketing, even though it was pretty early to this market. Yet it doesn't seem like it could keep up with its competitors.

James Cortada I would argue that the company was slow to get into both cloud computing and artificial intelligence, as both things were going on at the same time. And that's the Jeopardy phenomenon you refer to. It was slow to both. And so now IBM is in a catch-up mode, particularly on the cloud side. But it has so much horsepower, so much talent in artificial intelligence, that a little bit of a drag on coming into the market has allowed it to shape a whole series of new product offerings that the others haven't come up with, specifically industry-specific uses of artificial intelligence that play into IBM's strengths.

Steven Cherry Yeah, it is interesting to speculate, though, if the equivalent of Amazon Web services had been developed at IBM first, what would Amazon look like today and what would IBM look like?

James Cortada You know, it's interesting, because while I was at IBM, we had conversations about that. It wasn't clear at the time how to do that, because the Amazon formula was, "we'll give cloud to anybody who wants it." And we knew from prior experience that just being generic like that wasn't going to work, because your mother and my mother could show up and say, I want cloud computing. IBM can't deal with small enterprises when it comes to a technology like that. It has to be for General Motors, Ford, and so on. That's where its core strength is. So it wasn't clear in the beginning whether that would work. Secondly, there was a lot of concern about, would people move into the cloud? Meaning that we would lose a lot of installed-hardware sales and software sales. So there was a trade-off there, and nobody, either in the industry or within IBM, could quite figure out the specific costs as clearly as management would have liked. So it was fuzzy. So people kind of dragged their feet a little bit, I'll be honest.

Steven Cherry Jim, every company involved in information processing is a potential target of cyberattacks, cyberterrorism, even cyberwar. In a way, the firms we can’t afford to lose make up almost a litmus test of the most important companies. If we were to list them ourselves, it would surely include Google , Microsoft, Amazon, and Apple. Years ago, IBM would be at the top of that list. Would IBM still be on the list today?

James Cortada I believe it would be because a lot of the work that it does is behind the scenes in conference rooms and data centers that the public doesn’t see. You could go to the U.S. Department of Defense and have them put together a list and they would have on that list companies that you and I haven’t heard of. But when you ask them, well, what do they do? “Oh, yes, they definitely have to be on the list.”

IBM would be on the list because they do so much work to support the economic national infrastructure, not only in the United States, but of many, many countries. So it’s more than just the US plus also obviously its work with the military and NSA and all the other agencies. So, yeah, it would make the list. Remember IBM’s number one customer—largest customer for over a century—was the federal government, the U.S. federal government. And you and I will never know all the pieces of the business in there.

Steven Cherry I mentioned earlier GE; it was a Dow Jones company every decade of the 20th century—no other company can claim that. Yet if GE survives at all today, it will be as a much smaller firm with a much narrower mission. IBM as well keeps shrinking while its competitors are growing. In the book you note that over its long, illustrious history, IBM has generated over a trillion dollars in revenue. But that’s almost exactly the same revenue as Google—now Alphabet—in the mere 19 years from 2002 to 2020.

James Cortada Yes, but don’t judge companies simply by their revenue size. Judge them by the quality of the revenue—that is, profit. Who’s spending the money with them? IBM will be a smaller company, there’s no question about it. That doesn’t mean they’re going to be a poor company. Its profits are pretty high. Its cash flows are fabulous. It’s got a very strong balance sheet. I wouldn’t bet against IBM, but it’ll be a smaller company, there’s no question about it.

Steven Cherry Once again, my guest is historian Jim Cortada. When we come back, I’ll ask him about a surprisingly consistent pattern to each of IBM’s transitions.

But first I’d like to say how much we appreciate questions, comments, and suggestions from our listeners. For example, Chris A writes me after just about every energy-related show with thoughtful reflections that have enriched later shows. I can be reached by email at [email protected] or on Twitter @fixthefuturepod. We also welcome your rating us, especially on Apple Podcasts and Spotify. And if you go to an episode’s page on the Spectrum website, you can comment there, subscribe to alerts of new episodes, and find links to the people, places, and ideas mentioned in the show.

We’re back with IBM veteran and historian Jim Cortada. Jim, you have a set of three graphs in the book that literally chart the three biggest transitions of IBM through the decades. Maybe you can describe it.

James Cortada The three major transitions, from, if you will, a product and operations point of view, are the creation and selling of tabulating equipment from the 1890s to the 1950s; the second major transition is the era of the mainframe and the PC and other hardware products, from the 1950s to the end of the 1980s; and then the current period of services, both managerial consulting processes and also operational services. And that's the period that we're in now. Within each one of those, obviously, you get generations of hardware, generations of services. So, for example, under the services umbrella, we did outsourcing in the 1980s and process engineering in the 1990s. Now we're doing hybrid cloud and security, and the company is doing artificial intelligence work and what have you.

I lived through the transition from the mainframe era up to the artificial intelligence period of IBM. These are graphed on the chart. However, I would also add that in each case, you have different types of employees, different types of skill sets, and in some cases different types of customers as well. So we could have made a number of charts like this, but what they all have in common is a couple of messages.

Number one, the transitions took a long time. So when somebody tells you IBM transitioned within two or three years, that's nonsense. It took a decade on average in each case. The second thing I would point out is that it took its customers the same amount of time, because they also had to transition simultaneously with IBM. That's why one did it and the other one did it too: because of new technology, new forces in the marketplace. So you've got that additional transition.

What the charts don’t say, but it is in the text, is that the culture of the company to a large extent remained essentially the same until the 1990s when the company decided parts of its corporate culture had atrophied and needed significant remake. That is a new type of change that IBM is undergoing right now that is hugely different from what it had in the first hundred years.

Steven Cherry Jim, your book is 621 pages, not counting its notes and excellent index—not enough books have indexes these days. You spent hundreds of hours in IBM’s own archives with the privileged access of an employee. And yet I understand that you’re still learning more about IBM each day, in part due to social media. You’re getting a lot of interesting comments on the article in Spectrum , I understand.

James Cortada Yeah, let me explain how that works, which is kind of fun. You know, there are well over 10,000 retired IBM employees on various Facebook accounts. So when an article like this comes out, either on the System/360 or the PC, I make that article available to that community through the various websites. And of course, they immediately jump on it, because most of those people had personal experiences with each of those items. Right.

And it's amazing who comes out of the woodwork. Take the PC, which was announced in 1981. IBM had been working on that product for about 18 months. Well, obviously one of the things that you do when you're bringing out a new product is figure out, well, how many copies can I sell? Well, the guy who had to come up with that was on Facebook. And so when he read the article, he said, yeah, I love the article. Oh, by the way, I was the lead forecaster on the product. And he was a little sensitive, because one of the things I said in the article was that IBM grossly underestimated how many PCs would be sold, because everybody wanted the PC. The minute IBM announced it, it was just off the charts. He came back with a little response saying, well, my bosses reduced the forecast. And he didn't want to talk about it anymore. So there's a mystery out there, and we wouldn't have known any of that otherwise, right? Isn't that tantalizing—more research to be done as a result of that little comment?

Steven Cherry That’s fantastic. Well, Jim, it’s a remarkable story of a remarkable company, remarkably well told. Thanks for writing it and for joining us today.

James Cortada Thank you. It’s been a pleasure.

Steven Cherry We’ve been speaking with IBM veteran and Ph.D. historian James Cortada, author of the 2019 book IBM: The Rise and Fall and Reinvention of a Global Icon , about IBM’s glorious past, struggling present, and challenging future.

Fixing the Future is sponsored by COMSOL , makers of mathematical modeling software and a longtime supporter of IEEE Spectrum as a way to connect and communicate with engineers.

Fixing the Future is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded July 21, 2021, on Adobe Audition via Skype, and edited in Audacity. Our theme music is by Chad Crouch.

You can subscribe to Fixing the Future on Spotify, Stitcher, Apple, and wherever else you get your podcasts, or listen on the Spectrum website, which also contains transcripts of all our episodes. We welcome your feedback on the web or in social media.

For Fixing the Future, I’m Steven Cherry .

Paul Cusch

FYI .. your introduction mentioned the first two IBM Grand Challenges .. 'Deep Blue' and 'Jeopardy!' .. however you forgot to mention the newest and most amazing of all, IBM's 'Project Debater' grand challenge. This is the first AI system that can *debate* humans on complex topics. I think this is quite relevant as it demonstrates IBM's focused attention to cutting edge is very current.


Outsourcing Software Projects Made IBM an Amazing $135 Billion | Case Study

CodersOnFire

Since its inception in 1911, International Business Machines Corporation (IBM) has been a stalwart in the technology and consulting industry. As the company navigated the complexities of global expansion, IBM encountered challenges that prompted a strategic shift toward outsourcing software projects. This case study outlines the multifaceted journey of IBM's global outsourcing strategy, exploring the intricacies, challenges, and practical steps that fueled the company's ascent to operational excellence and toward $135 billion.

10 Critical Challenges During Software Project Outsourcing

In the latter part of the 20th century, IBM grappled with the challenges of managing a sprawling, global operation. The company faced issues related to the scalability of its workforce, escalating operational costs, and the imperative to innovate in an increasingly competitive landscape. IBM soon recognized the need for a transformative solution and initiated a strategic plan to outsource its software projects.

1. Limited Scalability:

  • Challenge: IBM faced constraints in scaling its workforce to meet the demands of a rapidly expanding global market.
  • Resolution: IBM overcame scalability limitations by strategically tapping into a global talent pool. Outsourcing allowed the company to flexibly scale its operations based on project requirements and market dynamics.

2. Escalating Operational Costs:

  • Challenge: The operational costs associated with maintaining an extensive in-house workforce became increasingly burdensome.
  • Resolution: Outsourcing non-core functions, especially certain IT services, enabled IBM to optimize its cost structure. The company redirected resources towards high-impact areas, resulting in significant cost savings.

3. Lack of Specialized Expertise:

  • Challenge: IBM faced challenges accessing specialized expertise required for specific projects and emerging technologies.
  • Resolution: Outsourcing to regions with a concentration of skilled professionals gave IBM access to diverse and specialized expertise. This strategic move ensured the availability of talent aligned with the evolving technological landscape for individual software projects.

4. Inefficient Project Delivery:

  • Challenge: In-house constraints led to delays in project delivery, hindering IBM’s ability to meet market demands.
  • Resolution: The strategic allocation of software projects to specialized outsourcing partners improved project delivery times. This efficiency became a key driver in enhancing IBM’s responsiveness to market needs.

5. Competitive Industry Dynamics:

  • Challenge: IBM needed to stay competitive in a rapidly evolving industry with emerging players and disruptive technologies.
  • Resolution: Outsourcing allowed IBM to stay nimble and innovative. The company redirected its focus towards core competencies and high-value services, enabling it to outpace competitors and lead in emerging technology trends.

6. Cultural Differences and Global Collaboration:

  • Challenge: Operating in diverse global markets presented challenges in managing cultural differences and fostering effective collaboration.
  • Resolution: IBM’s strategic partnerships were forged with an emphasis on clear communication and mutual understanding. The company implemented collaborative tools and frameworks to bridge cultural gaps and ensure seamless global collaboration for their software project.

7. Resource Allocation Challenges:

  • Challenge: Allocating resources effectively to balance various operational needs proved challenging within IBM’s expansive and diversified portfolio.
  • Resolution: Outsourcing non-core functions allowed IBM to reallocate resources towards areas that demanded strategic focus. This streamlined resource allocation contributed to improved overall organizational efficiency.

8. Adapting to Technological Advances:

  • Challenge: Keeping pace with rapid technological advancements presented a fierce challenge for IBM’s in-house teams.
  • Resolution: Strategic outsourcing enabled IBM to tap into external expertise and adapt swiftly to emerging technologies. Outsourcing partners often brought fresh perspectives and cutting-edge solutions, enhancing IBM’s capacity for innovation.

9. Customer-Centricity Challenges:

  • Challenge: Maintaining a customer-centric approach amid operational challenges required a tactical reevaluation.
  • Resolution: By outsourcing non-core functions, IBM freed up internal resources to focus on customer-centric initiatives. The company could prioritize customer needs and deliver tailored solutions with greater efficiency.

10. Risk Management and Contingency Planning:

  • Challenge: Managing risks associated with global operations and ensuring effective contingency planning demanded a robust strategy.
  • Resolution: Outsourcing partners often bring a level of risk diversification. Collaborative risk management strategies were established, ensuring that IBM had contingency plans in place to address unforeseen challenges in various regions.

IBM's Strategic Outsourcing Initiatives

1. Identifying Global Talent Pools:

IBM strategically identified regions with a surplus of skilled professionals, capitalizing on diverse talent pools by outsourcing their software projects. This approach allowed the company to harness specialized expertise while mitigating the challenges of maintaining an extensive in-house workforce.

2. Focus on Core Competencies:

One of the critical steps IBM took was to streamline its operations by outsourcing non-core functions. By offloading tasks that were not central to its expertise, such as certain IT services, the company redirected resources towards its core competencies, fostering a more efficient and focused organizational structure.

3. Strategic Vendor Selection:

IBM undertook a meticulous process in selecting outsourcing partners for their software projects. The company sought vendors with a proven track record, domain expertise, and a commitment to shared values. This emphasis on strategic partnerships laid the foundation for successful collaborations that extended beyond mere transactional relationships.

4. Establishing Clear Communication Channels:

Effective communication was paramount to the success of IBM’s outsourcing strategy. The company established transparent and open communication channels with its outsourcing partners, ensuring alignment in objectives, timelines, and expectations. Regular updates and feedback loops were integral to maintaining a collaborative working environment.

5. Implementing Agile Procedures:

IBM embraced agile methodologies in its outsourcing processes, enhancing flexibility and responsiveness to market dynamics. This iterative approach not only expedited the delivery of their software projects but also facilitated a more adaptive and customer-centric organizational culture.

The Impact of Outsourcing Software Projects


1. Cost Efficiency and Competitive Pricing:

IBM’s strategic outsourcing initiatives yielded significant cost efficiencies. By reducing operational overheads associated with in-house functions, the company could offer more competitive pricing for its products and services—this, in turn, contributed to the expansion of its customer base.

2. Profitability Surge Through Value-Added Services:

The optimized cost structure allowed IBM to reallocate resources towards high-margin, value-added services. This shift in focus played a pivotal role in the surge of profitability, as the company positioned itself as a provider of innovative solutions with substantial market demand.

3. Market Leadership Reinforcement:

IBM’s commitment to operational excellence through outsourcing solidified its position as a market leader. The company’s enhanced agility and capacity for innovation attracted a diverse clientele, reinforcing its dominance in the technology and consulting landscape.

4. Continuous Evolution and Adaptation:

IBM’s outsourcing strategy was not a one-time fix but a continuous journey of adaptation and evolution. The company proactively embraced emerging technologies, continually refined its outsourcing partnerships, and adapted its strategies to align with evolving market trends.

5. Learning from Setbacks:

Throughout its outsourcing journey, IBM encountered challenges and setbacks. Whether navigating cultural differences with outsourcing partners or managing the complexities of large-scale collaborations, the company consistently learned from its experiences. Continuous improvement became a cornerstone of IBM’s outsourcing strategy.

Final Verdict

IBM’s case exemplifies the transformative power of a well-executed global outsourcing strategy. By leveraging global talent, optimizing costs, and fostering collaborative partnerships, IBM addressed its initial operational challenges and evolved into a resilient, customer-focused industry leader.

The case of IBM serves as a comprehensive guide for organizations seeking sustained success through strategic outsourcing, emphasizing the importance of adaptability, clear communication, and a relentless commitment to operational excellence.

You, too, can adopt the strategy and outsource your software projects to an established industry-leading nearshore software development company like CodersOnFire.



