AWS Case Studies: Services and Benefits in 2024
With its extensive range of cloud services, Amazon Web Services (AWS) has changed the way businesses run. Through many case studies, organizations have shown how AWS transformed their operations by enabling scalability, cost efficiency, and innovation. AWS's computing, storage, database, and artificial intelligence services have benefited businesses of all sizes, from startups to multinational corporations, and the benefits include improved security, agility, worldwide reach, and lower infrastructure costs. Programs such as AWS Educate also help businesses in various industries to increase growth, enhance workflows, and maintain their competitiveness in today's ever-changing digital landscape. So, let's discuss AWS cloud migration case studies and their importance to get a better understanding of the topic.
What are AWS Case Studies, and Why are They Important?
AWS case studies are comprehensive accounts of how companies or organizations have used Amazon Web Services (AWS) to solve problems, boost productivity, and accomplish objectives. These studies provide real-life examples of AWS in operation, showcasing the wide range of sectors and use cases in which AWS can be successfully implemented. By detailing the tactics, solutions, and best practices businesses adopt on AWS, they offer valuable lessons and inspiration for anyone considering or already using the platform. They are important because they illustrate AWS's capabilities, help prospective customers understand the practical benefits, and showcase AWS's dependability, scalability, and affordability in fostering corporate innovation and expansion.
What are the Services Provided by AWS, and What are its Use Cases?
Below are the main services AWS provides and their common use cases:
Elastic Compute Cloud (EC2) Use Cases
Amazon Elastic Compute Cloud (EC2) enables you to quickly spin up virtual servers with no initial expenditure and no need for a significant hardware investment. Using the AWS Management Console or automation scripts, you can provision new servers for testing and production environments promptly and shut them down when they are not in use; a short scripted example follows the list below.
AWS EC2 use cases consist of:
- With options for load balancing and auto-scaling, create a fault-tolerant architecture.
- Select EC2 accelerated computing instances if you require a lot of processing power and GPU capability for deep learning and machine learning.
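As a rough sketch of the scripted provisioning mentioned above, the following Python snippet uses boto3 to launch a small test instance and stop it when it is no longer needed. The AMI ID, instance type, and tag values are placeholders for illustration, not values from any particular case study.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a small test instance (the AMI ID below is a placeholder).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Environment", "Value": "test"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# Shut the instance down when it is no longer needed to avoid idle cost.
ec2.stop_instances(InstanceIds=[instance_id])
```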
Relational Database Service (RDS) Use Cases
Since Amazon Relational Database Service (Amazon RDS) is a managed database service, it relieves teams of the stress associated with maintaining, administering, and handling other database-related responsibilities; a minimal provisioning example follows the list below.
Common AWS RDS use cases include:
- A new database server can be deployed in minutes without additional overhead or staffing costs, significantly improving dependability and uptime. RDS is a good fit for demanding daily OLTP/transactional database workloads.
- For non-relational needs, RDS can be complemented with NoSQL services such as Amazon OpenSearch Service (for text and unstructured data) and DynamoDB (for low-latency, high-traffic use cases).
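As an illustration of how little effort a managed database takes to provision, here is a minimal boto3 sketch. The identifier, credentials, and sizing are placeholder values; a production deployment would need careful choices around Multi-AZ, backups, and parameter groups.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a small managed MySQL instance (identifiers and credentials are placeholders).
rds.create_db_instance(
    DBInstanceIdentifier="example-app-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                 # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    BackupRetentionPeriod=7,             # keep daily automated backups for a week
    MultiAZ=False,
)

# Block until the instance is available, then print its endpoint.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="example-app-db")
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="example-app-db"
)["DBInstances"][0]["Endpoint"]["Address"]
print(endpoint)
```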
AWS Workspaces
AWS offers Amazon WorkSpaces, a fully managed, persistent desktop virtualization service, to support remote workers and give businesses access to virtual desktops in the cloud. With it, users can access the data, applications, and resources they need from any supported device, anywhere, at any time; a short provisioning example follows the list below.
AWS WorkSpaces use cases include:
- IT can set up and manage access quickly, and web filtering can be used to allow outgoing traffic from a WorkSpace only to approved internal sites.
- Some companies operate without physical offices and rely solely on SaaS applications, so they have no on-premises infrastructure. In these situations they use cloud-based desktops via Amazon WorkSpaces and related services.
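A minimal sketch of scripted WorkSpaces provisioning with boto3 is shown below; the directory ID, bundle ID, and user name are hypothetical and would come from your own directory and bundle catalog.

```python
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

# Request a cloud desktop for one user (all identifiers below are placeholders).
result = workspaces.create_workspaces(
    Workspaces=[{
        "DirectoryId": "d-1234567890",       # hypothetical directory ID
        "UserName": "jane.doe",
        "BundleId": "wsb-0123456789",        # hypothetical bundle ID
        "WorkspaceProperties": {
            "RunningMode": "AUTO_STOP",      # stop when idle to reduce cost
            "RunningModeAutoStopTimeoutInMinutes": 60,
        },
    }]
)
print(result["PendingRequests"])  # requests that were accepted and are provisioning
```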
AWS Case Studies
Now, let's discuss several AWS case studies:
Case Study - 1: Modern Web Application Platform with AWS
American Public Media (APM), the programming division of Minnesota Public Radio (MPR), is one of the world's largest producers and distributors of public radio programming. They developed a proof of concept for hosting their podcast, streaming music, and news websites on AWS.
After reviewing an outdated active-passive disaster recovery plan, MPR decided to upgrade to a cloud infrastructure to modernize its apps and methodology. This infrastructure would need to be adaptable to changes within the technology powering their apps, scalable to accommodate their audience growth, and resilient to support their disaster recovery strategy.
MPR and AWS determined that MPR News and the public podcast websites should be hosted on the new infrastructure to demonstrate that AWS was a feasible choice. AWS would also host several administrative applications (an image manager, a schedule editor, and a configuration manager) to demonstrate its private cloud capabilities.
To do this, AWS helped MPR set up an Amazon EKS Kubernetes cluster, which allows the applications to scale automatically with workload and traffic. AWS and MPR also set up Elasticsearch at Elastic.co and a MySQL instance in RDS to hold application data.
Business Benefits
The upgraded infrastructure enabled considerable cost savings: with reduced hardware requirements, fewer servers needed to be acquired for these vital applications. Additionally, the move to AWS made switching from the Akamai CDN to CloudFront simple, which cut MPR's yearly expenses by thousands of dollars.
Case Study - 2: Platform Modernisation to Deploy to AWS
Foodsby was able to proceed with its expansion goals after receiving a $6 million investment in 2017, but it still needed to modernize its mobile and web applications. To launch on AWS faster, it enhanced its web, iOS, and Android applications.
Technology that was being sunset put this project on a compressed timeline, and selecting the mobile application platform required careful analysis and expert advice to establish consensus among internal stakeholders.
The work involved restructuring the front-end and back-end web applications into microservices so they could be hosted on AWS and scaled more easily. For mobile, the recommendation was to go fully native on iOS and Android, and that solution was built and deployed quickly.
Case Study - 3: Cloud Platform with Kubernetes
SPS Commerce hired AWS to assist them with developing a more secure cloud platform, expanding their cloud deployment choices through Kubernetes, and educating their engineers on these advanced technologies.
SPS serves over 90,000 retail, distribution, grocery, and e-commerce businesses. However, to maintain its growth, SPS needed to remove obstacles to deploying new applications on AWS and, in the future, other cloud providers. Although they knew Kubernetes would help them achieve this, they wanted a partner to teach their internal development team DevOps principles and expose them to Kubernetes best practices.
To speed up new project cycle times, decrease ramp-up time, and improve the team's Kubernetes proficiency, the engagement delivered a multi-team, Kubernetes-based platform with a uniform development method. It also set the standards for development and deployment and helped establish the deployment pipeline.
Most teams can plug in and get code up and running quickly thanks to the streamlined deployment interface. SPS Commerce benefits from Kubernetes' flexibility and avoids vendor lock-in, preserving its ability to switch cloud providers.
Case Study - 4: Using Unified Payment Solutions to Simplify Government Services
The customer, which had a portfolio of firms under its authority, needed to overcome the difficulty of combining many payment methods into a single, unified solution and improve the payment experience.
Because of the customer's varied acquisitions, the payment system landscape had become fragmented, making it more difficult for clients to make payments across a range of platforms and technologies. This lack of coherence and standardization made it hard to provide a streamlined payment experience.
The project began with the development of a single, cloud-based payment system that complies with the customer's microservices-based reference design. The user interface for client administration was defined at the start of the project, and the CRUD services were created after it.
With this, the customer can streamline operations and increase efficiency by providing a smooth payment experience.
The new system was a tremendous improvement over the old capability, demonstrating the ability to handle thousands of transactions per second.
Aligning with the reference architecture kept the system consistent and made it easier to scale and maintain.
Case Study - 5: Accelerated Data Migration to AWS
During the early phases of a worldwide pandemic, Zipnosis needed a cloud platform that could safely transfer its data from a managed service provider to AWS.
Early in 2020, COVID-19 emerged, and telemedicine services were used to lessen the strain on hospital infrastructure. The number of telehealth web queries increased dramatically overnight, from 5,000 to 40,000 per minute. Zipnosis was able to change direction and reduce the duration of its AWS migration plan from six months to three. The migration had to meet HIPAA, SOC 2, and HITRUST certification requirements, and the team also wanted to move its legacy database smoothly across several web-facing applications while adhering to service level agreements (SLAs) that limited downtime.
The AWS platform uses Terraform and Amazon Elastic Kubernetes Service (EKS) to create a modern, infrastructure-as-code, HIPAA-compliant, and HITRUST-certified environment. With the help of serverless components, tooling was developed to roll out an "application envelope," enabling a HIPAA-compliant environment to be activated quickly.
Currently, Zipnosis manages the platform internally. With the added flexibility, scaling up and down is more affordable and accessible. Their scalable, secure, and efficient infrastructure makes their services more marketable to potential clients, and their use of modern technologies, such as Kubernetes on Amazon EKS, makes it easier to hire top talent. Zipnosis is in an excellent position to move forward.
Case Study - 6: Transforming Healthcare Staffing
The customer's outdated application presented difficulties: it was built on the legacy DBROCKET platform and lacked an intuitive user interface, testing tools, and extensibility. Modernizing the application meant giving the customer an improved, scalable, and maintainable solution.
Although the customer's old application was crucial for predicting hospital staffing needs, maintenance and improvements were challenging due to its reliance on the obscure DBROCKET platform. Hospitals lost money on inefficient staff scheduling because the application lacked responsiveness and a mobile-friendly interface.
Spring Boot and Groovy were chosen for back-end development to offer better maintainability and extensibility during the migration of the application from DBROCKET to a new technology stack, and unit tests were added to increase the reliability and quality of the code.
Efficiency at Catalis increased dramatically when the advanced document redaction technology was put in place. Because the automated procedure cut down the time and effort needed for manual redaction, documents could be processed at a significantly higher rate.
Catalis also cut infrastructure costs by using a serverless architecture and cloud-based services, saving a significant amount of money because it no longer needed to upgrade and maintain on-premises servers.
KnowledgeHut offers top-notch Cloud Computing courses that meet different demands and skill levels. Through a comprehensive curriculum, hands-on exercises, and expert-led instruction, attendees can learn about and gain practical experience with cloud platforms, including AWS, Azure, Google Cloud, and more. Professionals who complete these courses will be well equipped to succeed in the quickly developing field of cloud computing.
In conclusion, AWS case studies illustrate a wide range of capabilities and advantages. They show how firms in various industries use AWS for innovation and growth, from scalability to cost efficiency. Whether improving customer experiences with cloud-based solutions or streamlining processes using AI and machine learning, AWS offers a robust infrastructure and a range of technologies to satisfy changing business needs. These case studies provide substantial evidence of AWS's influence on digital transformation and the success of organizations.
Frequently Asked Questions (FAQs)
What can companies learn from AWS case studies?
From case studies of Amazon Web Services, companies can learn how other businesses use AWS services to solve real-world problems, increase productivity, cut expenses, and innovate. For those looking to optimize their cloud strategy and operations, these case studies provide valuable insights, proven methodologies, and direction.
Where can I find AWS case studies?
You can find AWS case studies on the AWS website, which has a dedicated section containing a large selection of case studies from different industries. In addition, AWS regularly releases new case studies on its blog and through various marketing channels.
How can AWS case studies help with decision-making for IT initiatives?
AWS case studies, which offer specific examples of how AWS services have been successfully applied in various settings, can significantly assist the decision-making process for IT initiatives. The insights, best practices, and possible solutions they provide can inform project planning and strategy.
Kingson Jebaraj
Kingson Jebaraj is a highly respected technology professional, recognized as both a Microsoft Most Valuable Professional (MVP) and an Alibaba Most Valuable Professional. With a wealth of experience in cloud computing, Kingson has collaborated with renowned companies like Microsoft, Reliance Telco, Novartis, Pacific Controls UAE, Alibaba Cloud, and G42 UAE. He specializes in architecting innovative solutions using emerging technologies, including cloud and edge computing, digital transformation, IoT, and programming languages like C, C++, Python, and NLP.
Case studies
Companies have applied serverless architectures to use cases from stock trade validation to e-commerce website construction to natural language processing. The AWS serverless portfolio offers the flexibility to create a wide array of applications, including those requiring assurance programs such as PCI or HIPAA compliance.
The following sections illustrate some of the most common use cases but are not a comprehensive list. For a complete list of customer references and use case documentation, see Serverless Computing.
Serverless websites, web apps, and mobile backends
Serverless approaches are ideal for applications where the load can vary dynamically. Using a serverless approach means no compute costs are incurred when there is no end-user traffic while still offering instant scale to meet high demand, such as a flash sale on an e-commerce site or a social media mention that drives a sudden wave of traffic.
Compared to traditional infrastructure approaches, it is also often significantly less expensive to develop, deliver, and operate a web or mobile backend when architected in a serverless fashion.
AWS provides the services developers need to construct these applications rapidly:
Amazon Simple Storage Service (Amazon S3) and AWS Amplify offer a simple hosting solution for static content.
AWS Lambda, in conjunction with Amazon API Gateway, provides support for dynamic API requests using functions (a minimal handler sketch follows this list of services).
Amazon DynamoDB offers a simple storage solution for session and per-user state.
Amazon Cognito provides an easy way to handle end-user registration, authentication, and access control to resources.
Developers can use AWS Serverless Application Model (SAM) to describe the various elements of an application.
AWS CodeStar can set up a CI/CD toolchain with just a few clicks.
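To make the pattern above concrete, here is a minimal sketch of a Lambda function behind API Gateway that reads and writes per-user state in DynamoDB. The table name, environment variable, and event shape (an API Gateway proxy integration with a userId path parameter) are assumptions for illustration and are not taken from the whitepaper.

```python
import json
import os

import boto3

# Table name is supplied via configuration; "UserState" is just a placeholder default.
TABLE = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "UserState"))


def handler(event, context):
    """Handle an API Gateway proxy request: GET returns a user's state, POST saves it."""
    user_id = event["pathParameters"]["userId"]

    if event["httpMethod"] == "POST":
        item = {"userId": user_id, **json.loads(event["body"] or "{}")}
        TABLE.put_item(Item=item)
        return {"statusCode": 201, "body": json.dumps(item)}

    result = TABLE.get_item(Key={"userId": user_id})
    if "Item" not in result:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(result["Item"], default=str)}
```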
To learn more, see the whitepaper AWS Serverless Multi-Tier Architectures, which provides a detailed examination of patterns for building serverless web applications. For complete reference architectures, see Serverless Reference Architecture for creating a Web Application and Serverless Reference Architecture for creating a Mobile Backend on GitHub.
Customer example: Neiman Marcus
A luxury household name, Neiman Marcus has a reputation for delivering a first-class, personalized customer service experience. To modernize and enhance that experience, the company wanted to develop Connect, an omnichannel digital selling application that would empower associates to view rich, personalized customer information with the goal of making each customer interaction unforgettable.
Choosing a serverless architecture with mobile development solutions on Amazon Web Services (AWS) enabled the development team to launch the app much faster than the 4 months it had originally planned. "Using AWS cloud-native and serverless technologies, we increased our speed to market by at least 50 percent and were able to accelerate the launch of Connect," says Sriram Vaidyanathan, senior director of omni engineering at Neiman Marcus.
This approach also greatly reduced app-building costs and provided developers with more agility for the development and rapid deployment of updates. The app elastically scales to support traffic at any volume for greater cost efficiency, and it has increased associate productivity. For more information, see the Neiman Marcus case study.
IoT backends
The benefits that a serverless architecture brings to web and mobile apps make it easy to construct IoT backends and device-based analytic processing systems that seamlessly scale with the number of devices.
For an example reference architecture, see Serverless Reference Architecture for creating an IoT Backend on GitHub.
Customer example: iRobot
iRobot, which makes robots such as the Roomba cleaning robot, uses AWS Lambda in conjunction with the AWS IoT service to create a serverless backend for its IoT platform. Because its robots are popular holiday gifts, iRobot experiences increased traffic on those days.
While huge traffic spikes could mean huge headaches for the company and its customers alike, by running on serverless, iRobot's engineering team doesn't have to worry about managing infrastructure or manually writing code to handle availability and scaling. This enables them to innovate faster and stay focused on customers. Watch the AWS re:Invent 2020 video Building the next generation of residential robots for more information.
Data processing
The largest serverless applications process massive volumes of data, much of it in real-time. Typical serverless data processing architectures use a combination of Amazon Kinesis and AWS Lambda to process streaming data, or they combine Amazon S3 and AWS Lambda to trigger computation in response to object creation or update events.
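As a minimal sketch of the S3-triggered variant of this pattern, the Lambda handler below is invoked by object-created events and simply inspects each new object; the bucket wiring and any real downstream processing are assumptions left as placeholders.

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Triggered by S3 object-created events; inspects each new object."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        head = s3.head_object(Bucket=bucket, Key=key)
        size = head["ContentLength"]

        # Placeholder for real work: transform the file, update an index, emit metrics, etc.
        print(f"New object s3://{bucket}/{key} ({size} bytes)")
```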
When workloads require more complex orchestration than a simple trigger, developers can use AWS Step Functions to create stateful or long-running workflows that invoke one or more Lambda functions as they progress. To learn more about serverless data processing architectures, see the following on GitHub:
Serverless Reference Architecture for Real-time Stream Processing
Serverless Reference Architecture for Real-time File Processing
Image Recognition and Processing Backend reference architecture
Customer example: FINRA
The Financial Industry Regulatory Authority (FINRA) used AWS Lambda to build a serverless data processing solution that enables them to perform half a trillion data validations on 37 billion stock market events daily.
In his talk at AWS re:Invent 2016 entitled The State of Serverless Computing (SVR311), Tim Griesbach, Senior Director at FINRA, said, "We found that Lambda was going to provide us with the best solution for this serverless cloud solution. With Lambda, the system was faster, cheaper, and more scalable. So at the end of the day, we've reduced our costs by over 50 percent, and we can track it daily, even hourly."
Customer example: Toyota Connected
Toyota Connected is a subsidiary of Toyota and a technology company offering connected platforms, big data, mobility services and other automotive-related services.
Toyota Connected chose serverless computing architecture to build its Toyota Mobility Services Platform, leveraging AWS Lambda, Amazon Kinesis Data Streams (Amazon KDS), and Amazon S3 to offer personalized, localized, and predictive data to enhance the driving experience.
With its serverless architecture, Toyota Connected seamlessly scaled to 18 times its usual traffic volume, with 18 billion transactions per month running through the platform, reducing aggregation job times from 15+ hours to 1/40th of the time while reducing operational burden. Additionally, serverless enabled Toyota Connected to deploy the same pipeline in other geographies with smaller volumes and only pay for the resources consumed.
For more information, read our Big Data Blog on Toyota Connected or watch the re:Invent 2020 video Reimagining mobility with Toyota Connected (AUT303).
AWS Lambda is a perfect match for many high-volume, parallel processing workloads. For an example of a reference architecture using MapReduce, see Reference Architecture for running serverless MapReduce jobs.
Customer example: Fannie Mae
Fannie Mae, a leading source of financing for mortgage lenders, uses AWS Lambda to run an "embarrassingly parallel" workload for its financial modeling. Fannie Mae uses Monte Carlo simulation processes to project future cash flows of mortgages that help manage mortgage risk.
The company found that its existing HPC grids were no longer meeting its growing business needs, so Fannie Mae built its new platform on Lambda, and the system successfully scaled up to 15,000 concurrent function executions during testing. The new system ran one simulation on 20 million mortgages in 2 hours, three times faster than the old system. Using a serverless architecture, Fannie Mae can run large-scale Monte Carlo simulations effectively because it doesn't pay for idle compute resources, and it can speed up its computations by running multiple Lambda functions concurrently.
Fannie Mae also achieved a shorter-than-typical time to market because it was able to dispense with server management and monitoring and to eliminate much of the complex code previously required to manage application scaling and reliability. See the Fannie Mae AWS Summit 2017 presentation SMC303: Real-time Data Processing Using AWS Lambda for more information.
IT automation
Serverless approaches eliminate the overhead of managing servers, making most infrastructure tasks, including provisioning, configuration, management, alarms/monitors, and timed cron jobs, easier to create and manage.
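As an illustrative sketch (not the customer tooling described below), a function like the following could run on a schedule, for example via an Amazon EventBridge rule, and stop any EC2 instances tagged for automatic shutdown outside working hours. The AutoStop tag is a hypothetical convention.

```python
import boto3

ec2 = boto3.client("ec2")


def handler(event, context):
    """Scheduled housekeeping: stop running instances tagged AutoStop=true."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:AutoStop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]

    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```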
Customer example: Autodesk
Autodesk, which makes 3D design and engineering software, uses AWS Lambda to automate its AWS account creation and management processes across its engineering organization.
Autodesk estimates that it realized cost savings of 98 percent (factoring in estimated savings in labor hours spent provisioning accounts). It can now provision accounts in just 10 minutes instead of the 10 hours it took to provision with the previous, infrastructure-based process.
The serverless solution enables Autodesk to automatically provision accounts, configure and enforce standards, and run audits with increased automation and fewer manual touchpoints. For more information, see the Autodesk AWS Summit 2017 presentation SMC301: The State of Serverless Computing. Visit GitHub to see the Autodesk Tailor service.
Machine learning
You can use serverless services to capture, store, and preprocess data before feeding it to your machine learning model. After training the model, you can also serve it for prediction at scale without provisioning or managing any infrastructure.
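For illustration, once a model has been trained and deployed behind a SageMaker endpoint, serving a prediction is a single API call with no servers for the caller to manage. The endpoint name and CSV payload below are assumptions for the sketch.

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

# Endpoint name and feature payload are placeholders for illustration.
response = runtime.invoke_endpoint(
    EndpointName="example-risk-model",
    ContentType="text/csv",
    Body="5.1,3.5,1.4,0.2",
)
prediction = response["Body"].read().decode("utf-8")
print(prediction)
```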
Customer example: Genworth
Genworth Mortgage Insurance Australia Limited is a leading provider of lenders' mortgage insurance in Australia. Genworth has more than 50 years of experience and data in this industry and wanted to use this historical information to train predictive analytics for loss mitigation machine learning models.
To achieve this, Genworth built a serverless machine learning pipeline at scale, using AWS Glue, a serverless managed ETL service, to ingest and transform data, and Amazon SageMaker batch transform jobs to perform ML inference and to process and publish the results of the analysis.
With the ML models, Genworth could analyze recent repayment patterns for each insurance policy and prioritize them by the likelihood and impact of each claim. This process was automated end to end to help the business make data-driven decisions and simplify high-value manual work performed by the Loss Mitigation team. Read the Machine Learning blog How Genworth built a serverless ML pipeline on AWS using Amazon SageMaker and AWS Glue for more information.
Software Engineering Institute
DevOps Case Study: Amazon AWS
C. Aaron Cois
February 5, 2015
Regular readers of this blog will recognize a recurring theme in this series: DevOps is fundamentally about reinforcing desired quality attributes through carefully constructed organizational process, communication, and workflow. When teaching software engineering to graduate students in Carnegie Mellon University's Heinz College, I often spend time discussing well known tech companies and their techniques for managing software engineering and sustainment. These discussions serve as valuable real-world examples for software engineering approaches and associated outcomes, and can serve as excellent case studies for DevOps practitioners. This posting will discuss one of my favorite real-world DevOps case studies: Amazon.
Amazon is one of the most prolific tech companies today. Amazon transformed itself in 2006 from an online retailer to a tech giant and pioneer in the cloud space with the release of Amazon Web Services (AWS), a widely used on-demand Infrastructure as a Service (IaaS) offering. Amazon accepted a lot of risk with AWS. By developing one of the first massive public cloud services, they accepted that many of the challenges would be unknown, and many of the solutions unproven. To learn from Amazon's success we need to ask the right questions. What steps did Amazon take to minimize this inherently risky venture? How did Amazon engineers define their process to ensure quality?
Luckily, some insight into these questions was made available when Google engineer Steve Yegge (a former Amazon engineer) accidentally made public an internal memo outlining his impression of Google's failings (and Amazon's successes) at platform engineering. This memo (which Yegge has specifically allowed to remain online) outlines a specific decision that illustrates CEO Jeff Bezos's understanding of the underlying tenets of what we now call DevOps, as well as his dedication to what I will claim are the primary quality attributes of the AWS platform: interoperability, availability, reliability, and security. According to Yegge, Jeff Bezos issued a mandate during the early development of the AWS platform, that stated, in Yegge's words:
- All teams will henceforth expose their data and functionality through service interfaces.
- Teams must communicate with each other through these interfaces.
- There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
- It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter. Bezos doesn't care.
- All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
- Anyone who doesn't do this will be fired.
Aside from the harsh presentation, take note of what is being done here. Engineering processes are being changed; that is, engineers at Amazon now must develop web service APIs to share all data internally across the entire organization. This change is specifically designed to incentivize engineers to build for the desired level of quality. Teams will be required to build usable APIs, or they will receive complaints from other teams needing to access their data. Availability and reliability will be enforced in the same fashion. As more completely unrelated teams need to share data, APIs will be secured as a means of protecting data, reducing resource usage, auditing, and restricting access from untrusted internal clients. Keep in mind that this mandate was to all teams, not just development teams. Marketing wants some data you have collected on user statistics from the web site? Then marketing has to find a developer and use your API. You can quickly see how this created a wide array of users, use cases, user types, and scenarios of use for every team exposing any data within Amazon.
DevOps teaches us to create a process that enforces our desired quality attributes, such as requiring automated deployment of our software to succeed before the continuous integration build can be considered successful. In effect, this scenario from Amazon is an authoritarian version of DevOps thinking. By enforcing a rigorous requirement of eating (and serving!) their own dog food to all teams within Amazon, Bezos's engineering operation ensures that through constant and rigorous use, their APIs would become mature, robust, and hardened.
These API improvements happened organically at Amazon, without the need to issue micromanaging commands such as "All APIs within Amazon must introduce rate limit X and scale to Y concurrent requests," because teams were incentivized to continually improve their APIs to make their own working lives easier. When AWS was released a few years later, many of these same APIs comprised the public interface of the AWS platform, which was remarkably comprehensive and stable at release. This level of quality at release directly served business goals by contributing to the early adoption rates and steady increase in popularity of AWS, a platform that provided users with a comprehensive suite of powerful capabilities and immediate comfort and confidence in a stable, mature service.
Additional Resources
To listen to the podcast, DevOps--Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=58525.
How Amazon grew an awkward side project into AWS, a behemoth that's now 4 times bigger than its original shopping business
If you have touched your phone or computer today, you very likely have been touched by a vast business few outside the technology world are aware of. Maybe you've checked the Wall Street Journal or MarketWatch, traded a stock on Robinhood, bet on a football game through DraftKings, or posted on Pinterest or Yelp; ordered treats for Fido on Chewy or treats for yourself on DoorDash; submitted an expense report on Workday or made plans for the evening via Tinder, OkCupid, or Hinge.
If so, you did it with the help of Amazon Web Services. The less glamorous sibling to Amazon's operations in e-commerce, streaming video, and smart devices, AWS is no less ubiquitous, deploying millions of computers worldwide, humming away somewhere in the cloud.
For all those AWS customers the on-demand cloud computing platform isn't just another vendor. They rely on it so heavily that it resembles a public utility: taken for granted, but essential to keep the machinery humming. In the past 12 months each of the companies mentioned above has stated in Securities and Exchange Commission filings that they "would be adversely impacted" if they lost their AWS service. Hundreds more companies (Netflix, Zoom, Intuit, Caesars Entertainment) have reported the same risk factor to the SEC in the past year. By the way, the SEC uses AWS. (So does Fortune.)
And those are but the tiniest fraction of AWS customers. AWS, initially run by Andy Jassy, who went on to succeed Jeff Bezos as Amazon's CEO, won't say how many customers it has, only that it provides computing power, data storage, and software to millions of organizations and individuals. Now, even as Amazon lays off a reported 10,000 workers, Wall Street analysts expect another blowout performance from its web services division. That's probably why few if any of those staffing cuts will affect this relatively recession-proof part of Bezos's empire. (Amazon won't say how many of its 1.5 million employees work for AWS.)
For years AWS has brought in more profit than all other divisions of Amazon combined, usually by a wide margin. AWS's operating profit last year, $18.5 billion, was nearly three times the operating profit reported by the rest of the company ($6.3 billion). AWS pulled in $58.7 billion of revenue in this year's first nine months; if it were independent, it would easily rank in the Fortune 100.
How did this offshoot of an online retailer come to rule the lucrative cloud-computing industry, towering over tech giants such as Microsoft and Google, which might have seemed better positioned to dominate?
AWS's ascent is so unlikely that it demands an explanation. It reveals the power of a truly iconoclastic culture that, while at times ruthless, ultimately breeds innovation and preserves top talent by encouraging entrepreneurship.
The best-known origin story of AWS is that it started when Amazon had some spare computer capacity and decided to rent it out to other companies. That story won't die, but it isn't true. The real story traces a circuitous path that could easily have ended in a ditch. It's grounded in a philosophy that still guides AWS's progress.
"To me, it's the concept of insurgents versus incumbents," says Adam Selipsky, who became AWS's CEO last year when his predecessor, Jassy, took over as Amazon CEO. Selipsky, 56, speaks quietly, conveying an understated intensity. "One thing that I think is really important, that we intentionally worry about all the time," he says, is that "we continue to keep the customer need dancing in front of our eyes at all times."
The real story of the AWS insurgency began with Amazon's innovative responses to two problems. First: By the early 2000s, Amazon, still known mainly as an online bookseller, had built from scratch one of the world's biggest websites, but adding new features had become frustratingly slow. Software engineering teams were spending 70% of their time building the basic elements any project would require: most important, a storage system and an appropriate computing infrastructure. Building those elements for projects at Amazon scale was hard, and all that work merely produced a foundation on which to build the cool new customer-pleasing features Amazon was seeking. Every project team was performing the same drudgery. Bezos and other Amazon managers started calling it "undifferentiated heavy lifting" and complaining that it produced "muck."
In response, Selipsky recalls, company leaders began to think, "Let's build a shared layer of infrastructure services that all these teams can rely on, and none of them have to spend time on general capabilities like storage, compute capabilities, databases." Amazon's leaders didn't think of it as an internal "cloud" (the term wasn't widely used in the tech world yet), but that's what it was.
The second problem involved other websites wanting to add links to Amazon products on their own pages. For example, a website about cooking might recommend a kitchen scale and include a link to the Amazon.com page for the product. Amazon was all for it, and would send them a bit of code they could plug into their site; if someone bought the product through the link, the site owner earned a fee. But as the program grew, cranking out bits of code for every affiliate site became overwhelming, and those affiliates' website developers wanted to create their own links and product displays instead of the ones Amazon sent them. So in 2002 Amazon offered them a more advanced piece of software, enabling them to create far more creative displays. The new software was complicated. Users had to write software rather than just plug it in. Yet thousands of developers loved it immediately.
When Amazon launched a fuller, free version of the software building block a few months later, it enabled anyone, not just affiliates, to incorporate Amazon features into their sites. The surprise: A lot of the downloads were going to Amazon's own software engineers. The building block turned out to be a proof of concept for the labor-lightening innovations that Amazon itself was looking for.
A picture was emerging. Amazon desperately needed to free its software developers from creating muck. Developers everywhere, not only its own, were starving for new tools that did just that. "We very quickly figured out that external developers had exactly the same problems as internal developers at Amazon," Selipsky says.
But was that a business for Amazon? During a 2003 offsite at Bezos's house, the company's top managers decided that it could be. That decision was the turning point, especially significant because it could so easily have gone the other way. Amazon's customers were consumers who bought "new, used, refurbished, and collectible items," as the company told investors at the time. Why would anyone imagine this company could build a business selling technology services to software developers?
The decision to plunge ahead revealed a subtle distinction that outsiders didn't understand. The world saw Amazon as an online retailer, but the company's leaders never thought of it that way. They thought of it as "a technology company that had simply applied its technology to the retail space first," Jassy later told Harvard Business School professors who were writing a case study. For that kind of company, AWS looked like a promising bet.
Coming out of the 2003 offsite, Jassy's job was to build a team and develop AWS. He wrote a proposal for it as a cloud-computing business. The document, one of the famous six-pagers used at Amazon's executive meetings instead of PowerPoint (which is banned), reportedly went through 31 revisions.
It took three years before AWS went live. In 2005 Jassy hired Selipsky from a software firm to run marketing, sales, and support, Selipsky recalls: "Amazon called and told me there was this initiative for something about turning the guts of Amazon inside out, but other companies could use it." AWS's first service, for data storage, "was such a novel concept that it was even hard to explain and hard for me to understand," he says.
Wall Street didn't get it. "I have yet to see how these investments are producing any profit," a Piper Jaffray analyst said in 2006. "They're probably more of a distraction than anything else."
The rest of the world didn't get it either. "I cannot tell you the number of times I got asked, with a quizzical look on people's faces, 'But what does this have to do with selling books?'" Selipsky recalls. "The answer, of course, was: AWS has nothing to do with selling books. But the technology we use to sell books has everything to do with AWS and what we can offer customers." Those customers were software developers, an entirely new target market that baffled outsiders.
AWS was prepared for that reaction. One of Amazon's principles reads in part: "As we do new things, we accept that we may be misunderstood for long periods of time."
On the day in March 2006 when AWS finally launched its inaugural service, S3, for Simple Storage Service, Selipsky was at a trade show in Santa Clara, Calif., "in a windowless, internet-less conference room," as he describes it, unable to learn how the launch was going. At day's end he and a colleague ran outside to call Seattle for news.
"We were told that 12,000 developers had signed up," he says, a note of marvel still in his voice. "On the first day. It was just amazing."
Five months later AWS launched its other foundational service, EC2, for Elastic Compute Cloud, which was also instantly popular. The revolution had begun. Instead of raising millions of dollars to buy servers and build data centers, startups could now get online with a credit card, and pay a monthly bill for just the computing power and storage they used. If their new app was a hit, they could immediately engage all the cloud services that they needed. If it bombed, they weren't stuck with rooms of junk equipment. As a Silicon Valley entrepreneur and early AWS customer told Wired in 2008: "Infrastructure is the big guys' most powerful asset. This levels the field."
In response to that historic shift, AWS's potential competitors did ... nothing. "A business miracle happened," Bezos told a conference years later. "This is the greatest piece of business luck in the history of business so far as I know. We faced no like-minded competition for seven years. I think the big established enterprise software companies did not see Amazon as a credible enterprise software company, so we had this incredible runway."
Selipsky suspects an additional motivation: "They either didn't believe this could be a real business, or they were so threatened by what it would do to their own business models, and the way they were overcharging customers, that they didn't want to believe it."
No one, not even at Amazon, foresaw how massive a business cloud computing would be, or AWS's dominance in the space. To understand how this happened, it's worth examining the company's guiding principles.
Eye-roll alert: Every company has principles, missions, visions, values; the vast majority are indistinguishable and sound as if they were written by committees, which they probably were. Some of Amazon's leadership principles, as they're called (there are 16), sound that way, until they get a little "peculiar," to use a favorite Amazonian word.
For example, principle No. 11 begins, "Earn trust." Leaders, it explains, "are vocally self-critical, even when doing so is awkward or embarrassing. Leaders do not believe their or their team's body odor smells of perfume." This peculiarity is a badge of pride at Amazon; its web page for job seekers even says that its use of the principles "is just one of the things that makes Amazon peculiar."
Not every Amazonian observes every principle all the time; in a company of 1.5 million employees, that's not realistic. But Amazon's batting average is high.
To answer the basic question of why a retailer would even think of creating AWS, consider principle No. 1, seemingly the hoariest of them all: "Customer obsession." Amazon sees itself as a tech company and sees the world as 8 billion potential customers. That's one reason AWS made sense for a bookseller.
Amazon allows new projects lots of time, as with AWS, in part to make sure decisions are based on data. An unusual principle states that leaders "work to disconfirm their beliefs." Groupthink is comforting, contagious, and dangerous. Being able to invoke one of the principles enables doubters to speak up.
"We have senior engineers who will stop a meeting and say, 'We've got to disconfirm our beliefs; we're going too far here without checking,'" says Mai-Lan Tomsen Bukovec, who oversees AWS's storage services. "That's actually kind of revolutionary in terms of corporate culture."
It's not a culture for everyone. Amazon is a famously demanding place to work, and there are plenty of stories of employees who found it to be too much. Media reports have criticized Amazon's treatment of workers, and the company is battling unionization efforts at some of its e-commerce warehouses. It's noteworthy that last year Amazon added a new leadership principle: "Strive to be Earth's best employer."
"It's not good for our business and not good for our customers if we turn out great employees and burn them out, and they leave after a couple of years," says Matt Garman, an early AWS employee who now oversees sales and marketing. "Sometimes there are people who don't like the culture, don't like those leadership principles. It's not a good fit for them. People like the culture or they don't like the culture, and I think that's okay. But we want people here for the long term."
Asked to describe AWS's strategy, Tomsen Bukovec says, "That's not a word we use a ton."
The foundation of conventional strategy, the subject of hundreds of books and articles, is understanding a company's industry and competitors. That approach gets us nowhere with Amazon. What industry is it in? No one industry encompasses selling dog food and selling computing power.
So does Amazon even have a strategy? "Yes," says Ram Charan, an adviser to CEOs and boards, and coauthor of a book on Amazon's management system. But "it's not a competitive strategy," he says. "It's a customer strategy."
That's a mind bender. Business is competition, and business strategy is inherently competitive strategy. Except that at Amazon it isn't. If it had been (if Amazon had been conventionally competitor-focused), AWS probably wouldn't exist.
Colin Bryar, a former Amazon executive, says he's often asked what Amazon is going to build next. Can it repeat what it did with AWS, create an out-of-the-blue business, unexpected and underestimated, in which it becomes dominant? "That's not the first question Amazon asks," Bryar says. "They ask, 'What's the next big customer problem we can go try to solve?'"
The word "big" is key. At Amazon's size (analysts expect revenue exceeding $500 billion for 2022), small problems are simply not of interest. When company leaders identify a sufficiently big problem, they must then conclude that Amazon can solve it, and that customers will adopt the solution. Those are not easy or quick questions to answer.
Cloud computing will grow 20% annually through 2026, far faster than any other segment of infotech, according to the Gartner tech consulting firm. It's no longer just smaller companies and startups who don't want to invest in their own server systems. Many AWS customers are increasing their spend, and some "spend literally hundreds of millions of dollars per year on AWS," says Gartner analyst Raj Bala, who sees the contracts. "I'm not shocked anymore to see a $200 million annual commitment, which is astonishing."
Yet AWS's dominance of the market will likely diminish even as its revenue grows. With a 44% share of the market, AWS has 20 points over Microsoft's 24%, but that lead is shrinking, says Bala. "In the next five, six, seven years, that gap is going to be very, very narrow, if not equal." That's because "a lot of late adopter enterprises are coming to market," he says, "and a lot of these folks will gravitate to Microsoft because they've got an existing contractual relationship with Microsoft."
The narrowing gap with Microsoft is probably inevitable. AWS's great challenge for the future is to maintain the discipline that made it a global colossus.
Losing that discipline is insidiously easy. Jim Collins, author of Good to Great, which identifies the factors shared by the world's most successful companies, has also written an analysis of failure, How the Mighty Fall. Winners invariably maintain discipline, and loss of discipline is always an element of decline. One of the principal threats? Attempts to control workers by overregulating them. "Bureaucracy subverts discipline," he tells Fortune.
When a company is growing as fast as AWS, it can be tempting to weaken hiring standards. "As you grow, you start to bring in some of the wrong people," he says, speaking of companies generally. "If they don't get the intensity of being there, they shouldn't be there, but if enough of them stay, you try to control them with bureaucracy. Then the right people get out, which creates a cycle."
With success and growth come further threats to discipline. When a business is riding high, "easy cash erodes cost discipline, and that discipline is hard to recover once you lose it," he says. Expansion brings risks, too: Responding desperately to deteriorating performance, the business bets on "undisciplined discontinuous leaps": acquisitions or expansions for which it isn't ready.
At the top of its game, bigger and stronger than any competitor, AWS must now meet an enviable challenge but a challenge nonetheless: the curse of success. Its most crucial task is to maintain the unwavering rigor, the discipline, of its principles and processes.
Selipsky seems to understand the need. Asked to define his job, he is silent for several seconds. Then, quietly but emphatically, he says his job "is to ensure that the positive, productive, useful elements of what got us to this stage, that we hold those dear, and we safeguard them, and we don't let them slip away. We don't become incumbents."
Amazon's next big, thorny problem to solve
What might be the next industry to get Amazon's AWS-style mega-venture treatment? The leading candidate is health care.
In 2018 Amazon bought PillPack, an online pharmacy, and last summer it paid $3.9 billion for One Medical, a membership-based primary-care provider operating across the U.S., saying in its announcement that "we think health care is high on the list of experiences that need reinvention."
No one would disagree. For a company that seeks big problems to solve, this may be the biggest opportunity of all. Health care is the largest sector of the U.S. economy, and the industry is growing fast worldwide.
Data is the problem at the heart of health care's inefficiency and unfathomable, wearisome customer experiences, and it's possible that it could be the solution.
That data is staggering in quantity and mostly unstructured (handwritten notes and X-ray and lab reports, sometimes of life-and-death importance) in an industry that is the last bastion of fax machines.
It's a particularly attractive conundrum to Amazon because of the company's dominance of cloud computing. AWS is already deeply entrenched in the industry, used by hospitals, pharma companies, equipment makers, insurers, pharmacy benefit managers, the Centers for Medicare and Medicaid Services, and more.
Another potential advantage is Amazon's massive international workforce and its enormous health care needs and expenses. Just as Amazon developed AWS by observing its own software needs and seeing them mirrored elsewhere, its own challenges as a growing corporate behemoth now may point the way to a new market opportunity.
This article appears in the December 2022/January 2023 issue of Fortune with the headline, "How Amazon's cloud took the world by storm."
Case studies of AWS serverless apps in production
Are you considering building a new serverless app on AWS? Or maybe migrating an existing workload to a serverless architecture?
Then you probably have a lot of exploratory research ahead of you. Reading tutorials and technical docs on how Lambda and other services work is important, and you might even decide to put together a proof-of-concept app to understand what it's like to develop with.
But to really get a feel for what it will be like running your app in production, you need to get the warts-and-all lowdown from people who've already navigated this path rather than naively jumping in head-first.
Yes, Amazon itself provides a "Customer Success" website dedicated to case studies, but this is effectively just a marketing exercise with no real insight into the more technical challenges that the customers encountered. Nice fluff for business execs, but pretty useless to engineers.
To help with this, I've curated the following list of articles from across the web written by organisations who built their production workloads with real users on a serverless AWS architecture. The articles describe the problems they hit along the way, the solutions they arrived at and an overall summary of the impact on their organisation.
The Case Studies
I've ranked each article using a Pain Index: a score out of 5 based on how much the author goes into detail on the pains they encountered using serverless techs and tools.
- How Ipdata Serves 25M API Calls From 10 Infinitely Scalable Global Endpoints For $150 A Month
- Our (not so smooth) journey productising a serverless app on AWS
- How I built a serverless web crawler to mine Vancouver real estate data at scale
- Lessons learned: a year of going "fully serverless" in production
- Building a serverless, on-demand data science solution
- Going serverless: from Common Lisp and CGI to AWS Lambda and API Gateway
- How I scaled my static website to a global market for a fraction of the cost on AWS
- Accelerating cross platform development with serverless microservices
- Serverless monitoring and notification pipeline
- A case study for serverless integration: customizing Opsgenie's Zendesk integration with AWS Lambda
- How locize leverages serverless
- Making MOT a live service
- Running an entire company on serverless: CloudSpoilt
- Serverless: the future of software architecture
- Serverless case study: Coca-Cola
Have your own serverless story? I will be evolving this list regularly, so if you've built a production serverless app on AWS and have written about it somewhere, just drop me a message with a link to your article and I'll happily add it.
Other articles you might enjoy:
- Concerns that go away in a serverless world
- How to deploy a high availability web app to AWS ECS
- How to calculate the billing savings of moving an EC2 app to Lambda
Free Email Course
How to transition your team to a serverless-first mindset.
In this 5-day email course, youâll learn:
- Lesson 1: Why serverless is inevitable
- Lesson 2: How to identify a candidate project for your first serverless application
- Lesson 3: How to compose the building blocks that AWS provides
- Lesson 4: Common mistakes to avoid when building your first serverless application
- Lesson 5: How to break ground on your first serverless project
𩺠Architecture & Process Review
Built a serverless app on AWS, but struggling with performance, maintainability, scalability or DevOps practices?
I can help by reviewing your codebase, architecture and delivery processes to identify risk areas and their causes. I will then recommend solutions and help you with their implementation.
Learn more >>
𪲠Testing Audit
Are bugs in production slowing you down and killing confidence in your product?
Get a tailored plan of action for overhauling your AWS serverless appâs tests and empower your team to ship faster with confidence.
10 Important Cloud Migration Case Studies You Need to Know
Aug 1, 2019 | Engineering
For most businesses considering cloud migration, the move is filled with promise and potential. Scalability, flexibility, reliability, cost-effectiveness, improved performance and disaster recovery, and simpler, faster deployment: what's not to like?
It's important to understand that cloud platform benefits come alongside considerable challenges, including the need to manage availability and latency, orchestrate auto-scaling, manage tricky connections, scale the development process effectively, and address cloud security challenges. While advancements in virtualization and containerization (e.g., Docker, Kubernetes) are helping many businesses solve these challenges, cloud migration is no simple matter.
That's why, when considering your organization's cloud migration strategy, it's beneficial to look at case studies and examples from other companies' cloud migration experiences. Why did they do it? How did they go about it? What happened? What benefits did they see, and what are the advantages and disadvantages of cloud computing for these businesses? Most importantly, what lessons did they learn, and what can you learn from them?
With that in mind, Distillery has put together 10 cloud migration case studies your business can learn from. While most of the case studies feature companies moving from on-premise, bare metal data centers to cloud, we also look at companies moving from cloud to cloud, cloud to multi-cloud, and even off the cloud. Armed with all these lessons, ideas, and strategies, you'll feel readier than ever to make the cloud work for your business.
Challenges for Cloud Adoption: Is Your Organization Ready to Scale and Be Cloud-first?
We examine several of these case studies from a more technical perspective in our white paper on Top Challenges for Cloud Adoption in 2019. In this white paper, you'll learn:
- Why cloud platform development created scaling challenges for businesses
- How scaling fits into the big picture of the Cloud Maturity Framework
- Why advancements in virtualization and containerization have helped businesses solve these scaling challenges
- How companies like Betabrand, Shopify, Spotify, Evernote, Waze, and others have solved these scaling challenges while continuing to innovate their businesses and provide value to users
#1 Betabrand: Bare Metal to Cloud
Betabrand (est. 2005) is a crowd-funded, crowd-sourced retail clothing e-commerce company that designs, manufactures, and releases limited-quantity products via its website.
Migration Objective
The company struggled with the maintenance difficulties and lack of scalability of the bare metal infrastructure supporting their operations.
Planning for and adding capacity took too much time and added costs. They also needed the ability to better handle website traffic surges.
Migration Strategy and Results
In anticipation of increased web traffic on Black Friday 2017, Betabrand migrated to a Google Cloud infrastructure managed by Kubernetes (Google Kubernetes Engine, or GKE). They experienced no migration-related issues, and Black Friday 2017 was a success.
By Black Friday 2018, early load testing and auto-scaling cloud infrastructure helped them handle peak loads with zero issues. The company hasn't experienced a single outage since migrating to the cloud.
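Load testing a replica environment ahead of a peak event, as Betabrand did, can start from very little tooling. The following is a minimal sketch using only the Python standard library; the target URL, request count, and concurrency figures are placeholders, not numbers reported in the case study.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib import request

TARGET_URL = "https://staging.example.com/"  # placeholder: a replica environment, never production
CONCURRENCY = 50
TOTAL_REQUESTS = 1_000


def hit(url: str) -> float:
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start


def main() -> None:
    # Fire TOTAL_REQUESTS requests with CONCURRENCY workers and report tail latency.
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(hit, [TARGET_URL] * TOTAL_REQUESTS))

    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"requests: {len(latencies)}, p95 latency: {p95:.3f}s, max: {latencies[-1]:.3f}s")


if __name__ == "__main__":
    main()
```

Real load tests add ramp-up, mixed request types, and realistic traffic shapes, but even a crude generator like this surfaces code paths that only misbehave under load.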
Key Takeaways
- With advance planning, cloud migration can be a simple process. Betabrand's 2017 on-premise to cloud migration proved smooth and simple. Ahead of the actual migration, they created multiple clusters in GKE and performed several test migrations, thereby identifying the right steps for a successful launch.
- Cloud streamlines load testing. Betabrand was able to quickly create a replica of its production services to use in load testing. The tests exposed poorly performing code paths that only show up under heavy load, and the team fixed them before Black Friday.
- Cloud's scalability is key to customer satisfaction. As a fast-growing e-commerce business, Betabrand realized they couldn't afford the downtime or delays of bare metal. Their cloud infrastructure scales automatically, helping them avoid issues and keep customers happy. This factor alone underlines the strategic importance of cloud computing in business organizations like Betabrand.
#2 Shopify: Cloud to Cloud
Shopify (est. 2006) provides a proprietary e-commerce software platform upon which businesses can build and run online stores and retail point-of-sale (POS) systems.
Shopify wanted to ensure they were using the best tools possible to support the evolution needed to meet increasing customer demand. Though they'd always been a cloud-based organization, building and running their e-commerce cloud in their own data centers, they sought to capitalize on the benefits of a container-based cloud and immutable infrastructure to provide better support to their customers. Specifically, they wanted to ensure predictable, repeatable builds and deployments; simpler and more robust rollbacks; and elimination of configuration management drift.
By building out their cloud with Google, building a "Shop Mover" database migration tool, and leveraging Docker containers and Kubernetes, Shopify has been able to transform its data center to better support customers' online shops, meeting all their objectives. For Shopify customers, the increasingly scalable, resilient applications mean improved consistency, reliability, and version control.
- Immutable infrastructure vastly improves deployments. Since cloud servers are never modified post-deployment, configuration drift (in which undocumented changes cause servers to diverge from one another and from the originally deployed configuration) is minimized or eliminated. This means deployments are easier, simpler, and more consistent; the sketch after this list makes the idea of drift concrete.
- Scalability is central to meeting the changing needs of dynamic e-commerce businesses. Shopify is home to online shops like Kylie Cosmetics, which hosts flash sales that can sell out in 20 seconds. Shopify's cloud-to-cloud migration helped its servers flex to meet fluctuating demand, ensuring that commerce isn't slowed or disrupted.
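To make configuration drift concrete, here is a small sketch of my own (not Shopify's tooling) that diffs the configuration a server was deployed with against what it currently reports. With immutable infrastructure, such a check should come back empty, because servers are replaced rather than modified in place. All setting names and values are invented for illustration.

```python
from typing import Any, Dict, List


def find_drift(deployed: Dict[str, Any], actual: Dict[str, Any]) -> List[str]:
    """Return a human-readable description of every setting that diverges."""
    drift = []
    for key in sorted(set(deployed) | set(actual)):
        if key not in actual:
            drift.append(f"{key}: missing on server (expected {deployed[key]!r})")
        elif key not in deployed:
            drift.append(f"{key}: undocumented setting {actual[key]!r} on server")
        elif deployed[key] != actual[key]:
            drift.append(f"{key}: expected {deployed[key]!r}, found {actual[key]!r}")
    return drift


# Illustrative values only.
deployed = {"nginx_workers": 4, "tls": "1.2", "max_body_mb": 10}
actual = {"nginx_workers": 8, "tls": "1.2", "debug": True}

for line in find_drift(deployed, actual):
    print(line)
```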
#3 Spotify: Bare Metal to Cloud
Spotify (est. 2006) is a media services provider primarily focused on its audio-streaming platform, which lets users search for, listen to, and share music and podcasts.
Spotify's leadership and engineering team agreed: the company's massive in-house data centers were difficult to provision and maintain, and they didn't directly serve the company's goal of being the "best music service in the world." They wanted to free up Spotify's engineers to focus on innovation. They started planning for migration to Google Cloud Platform (GCP) in 2015, hoping to minimize disruption to product development and to limit the cost and complexity of hybrid operation.
Spotify invested two years pre-migration in preparation, assigning a dedicated Spotify/Google cloud migration team to oversee the effort. They split the effort into two parts, services and data, which took a year apiece. For services migration, engineering teams moved services to the cloud in focused two-week sprints, pausing product development. For data migration, teams could choose between "forklifting" and rewriting to best fit their needs. Ultimately, Spotify's on-premise to cloud migration succeeded in increasing scalability while freeing up developers to innovate.
- Gaining stakeholder buy-in is crucial. Spotify was careful to consult its engineers about the vision. Once they could see what their jobs looked like in the future, they were all-in advocates.
- Migration preparation shouldn't be rushed. Spotify's dedicated migration team took the time to investigate various cloud strategies and build out the use case demonstrating the benefits of cloud computing to the business. They carefully mapped all dependencies. They also worked with Google to identify and orchestrate the right cloud strategies and solutions.
- Focus and dedication pay huge dividends. Spotify's dedicated migration team kept everything on track and in focus, making sure everyone involved was aware of past experience and lessons already learned. In addition, since engineering teams were fully focused on the migration effort, they were able to complete it more quickly, reducing the disruption to product development.
#4 Evernote: Bare Metal to Cloud
Evernote (est. 2008) is a collaborative, cross-platform note-taking and task management application that helps users capture, organize, and track ideas, tasks, and deadlines.
Evernote, which had maintained its own servers and network since inception, was feeling increasingly limited by its infrastructure. It was difficult to scale, and time-consuming and expensive to maintain. They wanted more flexibility, as well as to improve Evernote's speed, reliability, security, and disaster recovery planning. To minimize service disruption, they hoped to conduct the on-premise to cloud migration as efficiently as possible.
Starting in 2016, Evernote used an iterative approach: they built a strawman based on strategic decisions, tested its viability, and rapidly iterated. They then settled on a cloud migration strategy that used a phased cutover approach, enabling them to test parts of the migration before committing. They also added important levels of security by using GCP service accounts, achieving "encryption at rest," and improving disaster recovery processes. Evernote successfully migrated 5 billion notes and 5 billion attachments to GCP in only 70 days.
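A phased cutover is commonly implemented by routing a deterministic slice of users to the new backend and widening that slice phase by phase. The sketch below is a generic illustration of that pattern under those assumptions, not Evernote's actual mechanism; the user IDs and rollout percentages are placeholders.

```python
import hashlib


def routed_to_new_backend(user_id: str, rollout_percent: int) -> bool:
    """Deterministically place a user into the first `rollout_percent` of 100 buckets."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent


# Widen the slice phase by phase; dropping back to 0 acts as a rollback point.
for phase in (1, 10, 50, 100):
    moved = sum(routed_to_new_backend(f"user-{i}", phase) for i in range(10_000))
    print(f"rollout {phase:>3}% -> {moved} of 10000 users on the new backend")
```

Because the hash is stable, each user stays on the same side for the duration of a phase, which keeps behaviour consistent while the migration is validated.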
- Cloud migration doesn't have to happen all at once. You can migrate services in phases or waves grouped by service or user. Evernote's phased cutover approach allowed for rollback points if things weren't going according to plan, reducing migration risk.
- Ensuring data security in the cloud may require extra steps. Cloud security challenges may require extra focus in your cloud migration effort. Evernote worked with Google to create the additional security layers their business required. GCP service accounts can be customized and configured to use built-in public/private key pairs managed and rotated daily by Google.
- Cloud capabilities can improve disaster recovery planning. Evernote wanted to ensure that they would be better prepared to quickly recover customer data in the event of a disaster. Cloud's reliable, redundant, and robust data backups help make this possible.
#5 Etsy: Bare Metal to Cloud
Etsy (est. 2005) is a global e-commerce platform that allows sellers to build and run online stores selling handmade and vintage items and crafting supplies.
Etsy had maintained its own infrastructure from inception. In 2018, they decided to re-evaluate whether cloud was right for the company's future. In particular, they sought to improve site performance, engineering efficiency, and UX. They also wanted to ensure long-term scalability and sustainability, as well as to spend less time maintaining infrastructure and more time executing strategy.
Migration Strategy and Results
Etsy undertook a detailed vendor selection process, ultimately identifying GCP as the right choice for their cloud migration strategy. Since they'd already been running their own Kubernetes cluster inside their data center, they had a partial solution for deploying to GKE. They initially deployed in a hybrid environment (private data center and GKE), providing redundancy, reducing risk, and allowing them to perform A/B testing. They're on target to complete the migration and achieve all of their objectives.
Key Takeaways
- Business needs and technology fit should be periodically reassessed. While bare metal was the right choice for Etsy when it launched in 2005, improvements in infrastructure as a service (IaaS) and platform as a service (PaaS) made cloud migration the right choice in 2018.
- Detailed analysis can help businesses identify the right cloud solution for their needs. Etsy took a highly strategic approach to assessment that included requirements definition, RACI (responsible, accountable, consulted, informed) matrices, and architectural reviews. This helped them ensure that their cloud migration solution would genuinely help them achieve all their goals.
- Hybrid deployment can be effective for reducing cloud migration risk. Dual deployment on their private data center and GKE was an important aspect of Etsy's cloud migration strategy.
#6 Waze: Cloud to Multi-cloud
Waze (est. 2006; acquired by Google in 2013) is a GPS-enabled navigation application that uses real-time user location data and user-submitted reports to suggest optimized routes.
Though Waze moved to the cloud very early on, their fast growth quickly led to production issues that caused painful rollbacks, bottlenecks, and other complications. They needed to find a way to get faster feedback to users while mitigating or eliminating their production issues.
Waze decided to run an active-active architecture across multiple cloud providers, GCP and Amazon Web Services (AWS), to improve the resiliency of their production systems. This means they're better positioned to survive a DNS DDoS attack or a regional or global failure. An open-source continuous delivery platform called Spinnaker helps them deploy software changes while making rollbacks easy and reliable. Spinnaker makes it easy for Waze's engineers to deploy across both cloud platforms, using a consistent conceptual model that doesn't rely on detailed knowledge of either platform.
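On the calling side, the benefit of an active-active, multi-cloud deployment comes down to having a second, equivalent endpoint to fall back on. The sketch below is a generic illustration of that failover pattern, not Waze's implementation; both endpoint URLs are placeholders.

```python
from urllib import error, request

# Placeholder endpoints for two active-active deployments of the same service.
ENDPOINTS = [
    "https://api-gcp.example.com",
    "https://api-aws.example.com",
]


def fetch_with_failover(path: str) -> bytes:
    """Try each deployment in turn; raise only if every one of them fails."""
    last_error = None
    for base in ENDPOINTS:
        try:
            with request.urlopen(base + path, timeout=2) as resp:
                return resp.read()
        except (error.URLError, TimeoutError) as exc:
            last_error = exc  # this deployment looks unhealthy; fall through to the next
    raise RuntimeError("all deployments unreachable") from last_error


# Usage: fetch_with_failover("/route")
```

In practice this logic usually lives in DNS, load balancers, or a service mesh rather than application code, but the principle is the same: no single provider outage takes the service down.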
- Some business models may be a better fit for multiple clouds. Cloud strategies are not one-size-fits-all. Waze's stability and reliability depend on avoiding downtime, deploying quick fixes to bugs, and ensuring the resiliency of their production systems. Running on two clouds at once helps make it all happen.
- Your engineers don't necessarily have to be cloud experts to deploy effectively. Spinnaker streamlines multi-cloud deployment for Waze such that developers can focus on development rather than on becoming cloud experts.
- Deploying software more frequently doesn't have to mean reduced stability or reliability. Continuous delivery can get you to market faster, improving quality while reducing risk and cost.
#7 AdvancedMD: Bare Metal to Cloud
AdvancedMD (est. 1999) is a software platform used by medical professionals to manage their practices, securely share information, and manage workflow, billing, and other tasks.
AdvancedMD was being spun off from its parent company, ADP; to operate independently, it had to move all its data out of ADP's data center. Since they handle highly sensitive, protected patient data that must remain available to practitioners at a moment's notice, security and availability were top priorities. They sought an affordable, easy-to-manage, and easy-to-deploy solution that would scale to fit their customers' changing needs while keeping patient data secure and available.
AdvancedMD's on-premise to cloud migration would avoid the need to hire in-house storage experts, save them and their customers money, ensure availability, and let them quickly flex capacity to accommodate fluctuating needs. It also offered the simplicity and security they needed. Since AdvancedMD was already running NetApp storage arrays in its data center, it was easy to use NetApp's Cloud Volumes ONTAP to move their data to AWS. ONTAP also provides the enterprise-level data protection and encryption they require.
- Again, ensuring data security in the cloud may require extra steps. Though cloud has improved or mitigated some security concerns (e.g., vulnerable OS dependencies, long-lived compromised servers), hackers have turned their focus to the vulnerabilities that remain. Thus, your cloud migration strategy may need extra layers of controls (e.g., permissions, policies, encryption) to address these cloud security challenges; the sketch after this list shows one such control applied to object storage.
- When service costs are a concern, cloud's flexibility may help. AdvancedMD customers are small to mid-sized, budget-conscious businesses. Since cloud auto-scales, AdvancedMD never pays for more cloud infrastructure than they're actually using. That helps them keep customer pricing affordable.
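As one concrete example of the kind of extra control layer mentioned above, the sketch below uses boto3 to require default server-side encryption and block public access on an S3 bucket. The bucket name and KMS key alias are placeholders, and this illustrates the general technique rather than AdvancedMD's actual configuration (their data sits behind NetApp Cloud Volumes ONTAP).

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-patient-data-bucket"  # placeholder bucket name

# Require SSE-KMS encryption by default for every object written to the bucket.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-key",  # placeholder KMS key alias
                }
            }
        ]
    },
)

# Block every form of public access to the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

Settings like these are typically codified in infrastructure-as-code templates and enforced by policy so they cannot drift, rather than applied by hand.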
#8 Dropbox: Cloud to Hybrid
Dropbox (est. 2007) is a file hosting service that provides cloud storage and file synchronization solutions for customers.
Dropbox had developed its business by using the cloud, specifically Amazon S3 (Simple Storage Service), to house data while keeping metadata on-premise. Over time, they began to fear they'd become overly dependent on Amazon: not only were costs increasing as their storage needs grew, but Amazon was also planning a similar service offering, Amazon WorkDocs. Dropbox decided to take back their storage to help them reduce costs, increase control, and maintain their competitive edge.
While the task of moving all that data to an in-house infrastructure was daunting, the company decided it was worth it, at least in the US (Dropbox assessed that in Europe, AWS is still the best fit). Dropbox designed and built in-house a massive network of new-breed machines orchestrated by software written in an entirely new programming language, moving about 90% of its files back to its own servers. Dropbox's expanded in-house capabilities have enabled them to offer Project Infinite, which provides desktop users with universal compatibility and unlimited real-time data access.
- On-premise infrastructure may still be right for some businesses. Since Dropbox's core product relies on fast, reliable data access and storage, they need to ensure consistently high performance at a sustainable cost. Going in-house required a huge investment, but improved performance and reduced costs may serve them better in the long run. Once Dropbox understood that big picture, they had to recalculate the strategic importance of cloud computing to their organization.
- Size matters. As Wired lays out in its article detailing the move, cloud businesses are not charities. There's always going to be margin somewhere. If a business is big enough, like Dropbox, it may make sense to take on the difficulties of building a massive in-house network. But it's a huge risk for businesses that aren't big enough, or whose growth may stall.
#9 GitLab: Cloud to Cloud
GitLab (est. 2011) is an open core company that provides a single application supporting the entire DevOps life cycle for more than 100,000 organizations.
GitLab's core application enables software development teams to collaborate on projects in real time, avoiding both handoffs and delays. GitLab wanted to improve performance and reliability, accelerating development while making it as seamless, efficient, and error-free as possible. While they acknowledged that Microsoft Azure had been a great cloud provider, they strongly believed that GCP's Kubernetes was the future, calling it "a technology that makes reliability at massive scale possible."
In 2018, GitLab migrated from Azure to GCP so that its application could run as a cloud-native application on GKE. They used their own Geo product to migrate the data, initially mirroring it between Azure and GCP. Post-migration, GitLab reported improved performance (including fewer latency spikes) and a 61% improvement in availability.
- Containers are seen by many as the future of DevOps. GitLab was explicit that they view Kubernetes as the future. Indeed, containers provide notable benefits, including a smaller footprint, predictability, and the ability to scale up and down in real time. For GitLab's users, the company's cloud-to-cloud migration makes it easier to get started with using Kubernetes for DevOps.
- Improved stability and availability can be a big benefit of cloud migration. In GitLab's case, mean time between outage events pre-migration was 1.3 days. Excluding the first day post-migration, they're up to 12 days between outage events. Pre-migration, they averaged 32 minutes of downtime weekly; post-migration, they're down to 5 minutes. The short calculation after this list translates those downtime figures into availability percentages.
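The calculation below converts the weekly downtime figures quoted above into availability percentages. It is my own arithmetic on those numbers, not GitLab's published methodology, so it does not reproduce their 61% figure directly.

```python
MINUTES_PER_WEEK = 7 * 24 * 60  # 10,080 minutes in a week


def weekly_availability(downtime_minutes: float) -> float:
    """Availability as a percentage of the week the service was up."""
    return 100 * (1 - downtime_minutes / MINUTES_PER_WEEK)


for label, downtime in (("pre-migration", 32), ("post-migration", 5)):
    print(f"{label}: {weekly_availability(downtime):.3f}% available")
# pre-migration: 99.683% available
# post-migration: 99.950% available
```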
#10 Cordant Group: Bare Metal to Hybrid
The Cordant Group (est. 1957) is a global social enterprise that provides a range of services and solutions, including recruitment, security, cleaning, health care, and technical electrical services.
Over the years, the Cordant Group had grown tremendously, requiring an extensive IT infrastructure to support their vast range of services. While they'd previously focused on capital expenses, they had shifted to looking at operational expenses (OpEx), which meant cloud's "pay as you go" model made increasing sense. It was also crucial to ensure ease of use and robust data backups.
They began by moving to a virtual private cloud on AWS, but found that the requirement to use Windows DFS for file server resource management was creating access problems. NetApp Cloud ONTAP, a software storage appliance that runs on AWS server and storage resources, solved the issue. File and storage management is easier than ever, and backups are robust, which means that important data restores quickly. The solution also monitors resource costs over time, enabling more accurate planning that drives additional cost savings.
- Business and user needs drive cloud needs. That's why cloud strategies will absolutely vary based on a company's unique needs. The Cordant Group needed to revisit its cloud computing strategy when users were unable to quickly access the files they needed. In addition, with such a diverse user group, ease of use had to be a top priority.
- Cloud ROI ultimately depends on how your business measures ROI. The strategic importance of cloud computing in business organizations is specific to each organization. Cloud became the right answer for the Cordant Group when OpEx became the company's dominant lens.
Which Cloud Migration Strategy Is Right for You?
As these 10 diverse case studies show, cloud strategies are not one-size-fits-all. Choosing the right cloud migration strategy for your business depends on several factors, including your:
- Goals. What business results do you want to achieve as a result of the migration? How does your business measure ROI? What problems are you trying to solve via your cloud migration strategy?
- Business model. What is your current state? What are your core products/services and user needs, and how are they impacted by how and where data is stored? What are your development and deployment needs, issues, and constraints? What are your organization's cost drivers? How is your business impacted by lack of stability or availability? Can you afford downtime?
- Security needs. What are your requirements regarding data privacy, confidentiality, encryption, identity and access management, and regulatory compliance? Which cloud security challenges pose potential problems for your business?
- Scaling needs. Do your needs and usage fluctuate? Do you expect to grow or shrink?
- Disaster recovery and business continuity needs. What are your needs and capabilities in this area? How might your business be impacted in the event of a major disaster, or even a minor service interruption?
- Technical expertise. What expertise do you need to run and innovate your core business? What expertise do you have in-house? Are you allocating your in-house expertise to the right efforts?
- Team focus and capacity. How much time and focus can your team dedicate to the cloud migration effort?
- Timeline. What business needs constrain your timeline? What core business activities must remain uninterrupted? How much time can you allow for planning and testing your cloud migration strategy?
Of course, this list isn't exhaustive. These questions are only a starting point. But getting started (planning, better understanding your goals and drivers, and assessing potential technology fit) is the most important step of any cloud migration process. We hope these 10 case studies have helped to get you thinking in the right direction.
While the challenges of cloud migration are considerable, the right guidance, planning, and tools can lead you to the cloud strategies and solutions that will work best for your business. So don't delay: take that first step toward helping your business reap the potential advantages and benefits of cloud computing.
Ready to take the next step on your cloud journey? As a Certified Google Cloud Technology Partner, Distillery is here to help. Download our white paper on top challenges for cloud adoption to get tactical and strategic about using cloud to transform your business.