
10 Financial Analytics Case Studies [2024]

Financial analytics merges the precision of data science with the strategic depth of financial theory, creating an indispensable toolkit for navigating the complexities of the modern business landscape. This field utilizes sophisticated data analysis techniques alongside financial insights to bolster strategic decision-making, enhance financial performance, and influence policy formulation. Its broad applicability spans a multitude of activities, including advanced risk management practices, nuanced investment analysis, and the optimization of financial strategies, playing a pivotal role in guiding companies through the intricacies of the financial markets.

The discussion presents ten illustrative case studies that spotlight the significant impact of financial analytics across various industries. These examples reveal how entities ranging from burgeoning startups to established corporate giants have leveraged analytical methodologies to address pressing challenges, capitalize on emerging opportunities, and propel their strategic goals. Through this exploration, we aim to shed light on the practical deployment of financial analytics, underscoring its potential to not only resolve complex dilemmas but also to drive innovation, streamline operations, and foster sustainable growth. Through the lens of these narratives, financial analytics is revealed as a cornerstone of competitive advantage and organizational resilience, demonstrating its critical role in enabling businesses to maneuver adeptly through the evolving financial terrain.

10 Financial Analytics Case Studies

1. Risk Management in the Banking Sector: JPMorgan Chase & Co.

JPMorgan Chase & Co. has harnessed the power of big data analytics and machine learning to revolutionize its approach to risk management. The bank’s use of advanced algorithms enables the analysis of vast datasets, identifying subtle patterns of fraudulent activities and potential credit risk that would be impossible for human analysts to detect. This capability is powered by AI technologies that learn from data over time, improving their predictive accuracy with each transaction analyzed.

Furthermore, JPMorgan employs predictive analytics to forecast future financial risks, allowing for preemptive measures to be taken. The bank has also developed sophisticated simulation models that can assess the potential impact of various market scenarios on its portfolio, enhancing its stress testing processes. These technological advancements have not only bolstered the bank’s resilience against financial uncertainties but have also led to a more dynamic and responsive risk management strategy. The adoption of these technologies has yielded significant benefits, including reduced operational costs, minimized losses from fraud, and an overall improvement in financial health and stability.
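To make the pattern-detection idea concrete, here is a minimal anomaly-detection sketch in the spirit of transaction fraud screening. It is not JPMorgan's actual system: the features and data are synthetic assumptions, and scikit-learn's IsolationForest stands in for the bank's proprietary models.

```python
# Illustrative only: flag unusual transactions with an isolation forest.
# Features (amount, hour of day, merchant risk score) are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=[50, 14, 0.2], scale=[30, 4, 0.1], size=(1000, 3))
suspicious = rng.normal(loc=[5000, 3, 0.9], scale=[1000, 1, 0.05], size=(10, 3))
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks likely anomalies, 1 marks inliers
print(f"flagged {np.sum(flags == -1)} of {len(X)} transactions for review")
```

In practice, flagged transactions would feed a human review queue rather than trigger automatic blocking.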

Related: How Can AI Be Used in Financial Analytics?

2. Portfolio Optimization for an Investment Firm: BlackRock

BlackRock’s proprietary platform, Aladdin, stands as a testament to the integration of cutting-edge technology in financial analytics for portfolio management. Aladdin’s comprehensive suite combines risk analytics, portfolio management, and trading tools into a single platform. This integration allows for real-time analysis and optimization of investment portfolios. The platform employs quantitative models that leverage historical and current market data to simulate various investment strategies, assessing their potential risks and returns.

Moreover, Aladdin utilizes machine learning to refine its predictive capabilities, enabling more accurate forecasting of market movements and asset performance. This allows BlackRock to tailor investment portfolios that are closely aligned with the client’s risk tolerance and financial goals, achieving optimal risk-adjusted returns. The use of such sophisticated analytics tools has empowered BlackRock to navigate complex markets more effectively, ensuring strategic asset allocation and informed decision-making. Clients benefit from enhanced portfolio performance, greater transparency in investment processes, and improved risk management.
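As a simplified illustration of the portfolio analytics described above, the sketch below runs a textbook mean-variance optimization. The returns are synthetic and the risk-aversion parameter is an assumption; Aladdin's actual models are proprietary and far richer.

```python
# A minimal mean-variance portfolio optimization sketch (not Aladdin).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
returns = rng.normal(0.0005, 0.01, size=(500, 4))  # daily returns, 4 assets
mu, cov = returns.mean(axis=0), np.cov(returns, rowvar=False)
risk_aversion = 5.0  # assumed client risk preference

def objective(w):
    # Maximize mu'w - lambda * w'Cov w by minimizing its negative
    return -(w @ mu - risk_aversion * (w @ cov @ w))

n = len(mu)
constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1},)  # fully invested
bounds = [(0, 1)] * n  # long-only
result = minimize(objective, x0=np.full(n, 1 / n), bounds=bounds,
                  constraints=constraints)
print("optimal weights:", np.round(result.x, 3))
```

Raising `risk_aversion` shifts the weights toward lower-variance assets, mirroring how a portfolio is tailored to a client's risk tolerance.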

3. Revenue Forecasting for a Retail Chain: Walmart

Walmart’s approach to revenue forecasting exemplifies the strategic use of data analytics and machine learning in retail. By analyzing a diverse array of data sources, including sales records, customer demographics, and buying patterns, Walmart applies sophisticated forecasting models that incorporate seasonal trends, promotional impacts, and economic indicators. This analytical rigor enables Walmart to make accurate predictions about future sales trends, which is essential for inventory management and marketing strategy formulation.

The retail giant’s investment in machine learning technologies further refines its forecasting models, allowing for adjustments in real time based on emerging data. This dynamic approach to forecasting supports Walmart in maintaining optimal inventory levels, reducing stockouts or overstock situations, and maximizing sales opportunities. Additionally, Walmart leverages these insights to tailor marketing efforts, enhancing customer engagement and satisfaction. The integration of these advanced technologies into Walmart’s operational framework has led to significant improvements in efficiency, cost savings, and overall financial performance, setting a benchmark for the retail industry.
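The core of such a forecasting model can be sketched in a few lines: regress sales on a trend plus month-of-year indicators. This is a hedged toy version on synthetic data; Walmart's production models also fold in promotions, economic indicators, and store-level effects.

```python
# Toy seasonal sales forecast: linear trend + one-hot month features.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
months = np.arange(48)  # four years of monthly observations
sales = (100 + 0.5 * months + 10 * np.sin(2 * np.pi * months / 12)
         + rng.normal(0, 2, 48))

X = np.column_stack([months, np.eye(12)[months % 12]])  # trend + month dummies
model = LinearRegression().fit(X, sales)

future = np.arange(48, 60)  # forecast the next 12 months
X_future = np.column_stack([future, np.eye(12)[future % 12]])
print(np.round(model.predict(X_future), 1))
```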

Related: How Can CFO Use Financial Analytics?

4. Financial Analytics in Healthcare Cost Reduction: Kaiser Permanente

Kaiser Permanente takes a comprehensive approach to financial analytics, integrating predictive analytics, data visualization, and advanced statistical models to scrutinize patient care data, treatment outcomes, and operational costs. This multifaceted analysis allows Kaiser to identify inefficiencies and areas where improvements can be made without compromising the quality of patient care. For instance, by employing predictive analytics, Kaiser can forecast patient admissions and manage staffing levels more efficiently, reducing unnecessary labor costs.

Data visualization tools are beneficial for conveying intricate data insights throughout an organization, enabling informed decision-making based on data. These technologies have enabled Kaiser Permanente to implement strategic cost-saving measures, such as optimizing supply chain logistics for medical supplies and reducing readmission rates through better patient care programs. The result is a dual achievement: maintaining high standards of patient care while significantly reducing operational costs, demonstrating the power of financial analytics in balancing cost efficiency with quality healthcare delivery.

5. Enhancing Customer Loyalty through Analytics: American Express

American Express’s strategy for enhancing customer loyalty involves a sophisticated analytics infrastructure that leverages big data, machine learning, and predictive analytics. The company analyzes vast datasets encompassing spending patterns, customer feedback, and engagement levels to gain deep insights into customer behavior and preferences. Machine learning models are then employed to personalize offerings and rewards, tailoring services to individual customer needs and expectations.

This personalized approach is made possible by American Express’s investment in AI and natural language processing (NLP) technologies, which enable the company to analyze unstructured data sources, such as customer feedback on social media and review platforms. The insights derived from these analyses inform targeted marketing campaigns and loyalty programs, fostering a sense of value and recognition among customers. This strategy has proven effective in strengthening customer relationships, enhancing satisfaction, and, ultimately, driving loyalty and retention in the competitive financial services market.

Related: Will AI Replace Financial Analysts?

6. Predictive Analytics in Credit Scoring: Kabbage

Kabbage’s innovative approach to credit scoring exemplifies the transformative potential of financial analytics in fintech. By leveraging machine learning algorithms and big data analytics, Kabbage analyzes a wide array of non-traditional data sources, including online sales, banking transactions, and social media activity, to assess the creditworthiness of small businesses. This data-driven approach allows Kabbage to generate more accurate and nuanced credit profiles, especially for businesses with limited credit histories or those traditionally underserved by conventional banks.

The technology stack employed by Kabbage includes advanced machine learning models that continuously learn and adapt based on new data, improving the accuracy of credit assessments over time. Furthermore, Kabbage utilizes natural language processing to analyze textual data from social media and other digital platforms, gaining insights into the business’s customer engagement and market presence. This comprehensive and inclusive approach to credit scoring has not only enabled Kabbage to expand access to credit for small businesses but has also streamlined the application and approval process, making it faster and more user-friendly.
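A hedged sketch of the core modeling step is below: a gradient-boosted classifier trained on a few non-traditional features. The feature names and data are invented stand-ins, not Kabbage's actual inputs or model.

```python
# Illustrative credit-scoring sketch on synthetic, non-traditional features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.lognormal(8, 1, n),      # hypothetical monthly online sales
    rng.uniform(0, 1, n),        # hypothetical bank balance volatility
    rng.normal(0.5, 0.2, n),     # hypothetical review sentiment score
])
# Synthetic default labels loosely tied to the features
p_default = 1 / (1 + np.exp(0.00005 * X[:, 0] - 3 * X[:, 1] + 2 * X[:, 2]))
y = (rng.uniform(size=n) < p_default).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")
```

In a real system, the model's probability scores would be calibrated and mapped to credit limits and pricing rather than judged on raw accuracy.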

7. Operational Efficiency through Process Analytics: Toyota

Toyota’s implementation of the Toyota Production System (TPS) is a benchmark in manufacturing excellence, deeply integrated with real-time data analysis and financial metrics to enhance operational efficiency. The TPS, known for its principles of Just-In-Time (JIT) production and continuous improvement (Kaizen), is further empowered by financial analytics to reduce waste and optimize production flow. Toyota employs advanced data analytics tools to monitor every aspect of the production process, from inventory levels to equipment efficiency, allowing for immediate adjustments that reduce downtime and material waste.

The integration of Internet of Things (IoT) technology into Toyota’s manufacturing processes allows for the collection of real-time data from machinery and equipment, enabling predictive maintenance and reducing unplanned outages. By correlating this operational data with financial performance, Toyota can directly measure the impact of process improvements on cost savings and productivity, ensuring that its manufacturing operations are not only efficient but also cost-effective. This holistic approach to operational excellence through data analytics has kept Toyota at the forefront of the automotive industry.

Related: Role of Data Analytics in FinTech?

8. Real Estate Investment Analysis: Zillow

Zillow leverages a sophisticated combination of financial analytics, machine learning, and big data to revolutionize real estate investment analysis. The platform’s Zestimate feature employs statistical and machine learning models to analyze millions of property listings, sales data, and regional market trends, providing an accurate estimate of a home’s market value. This technology enables investors and homebuyers to identify potential investment opportunities and assess property values with a high degree of accuracy.

Beyond Zestimate, Zillow uses geospatial analysis and predictive modeling to understand local real estate trends, demographic shifts, and economic indicators that could affect property values. This comprehensive analytical approach allows Zillow to offer a suite of tools and insights that empower users to make informed decisions in the real estate market. For investors, this means the ability to quickly identify undervalued properties, predict future market movements, and optimize investment portfolios according to changing market conditions.
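A toy version of the estimation idea is sketched below: a gradient-boosted regressor on a handful of property features. The features and prices are synthetic assumptions; the real Zestimate draws on far richer listing, transaction, and regional data.

```python
# Toy home-value estimator on synthetic property features (not the Zestimate).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(11)
n = 1500
sqft = rng.uniform(600, 4000, n)
beds = rng.integers(1, 6, n)
age = rng.uniform(0, 80, n)
price = (50_000 + 150 * sqft + 10_000 * beds - 500 * age
         + rng.normal(0, 20_000, n))

X = np.column_stack([sqft, beds, age])
model = GradientBoostingRegressor().fit(X, price)
sample = np.array([[1800, 3, 15]])  # 1,800 sqft, 3 beds, 15 years old
print(f"estimated value: ${model.predict(sample)[0]:,.0f}")
```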

9. Strategic Planning for a Tech Giant: Google

Google’s strategic planning and decision-making processes are deeply rooted in financial analytics, leveraging the company’s vast data resources and AI capabilities. Google uses predictive modeling and scenario analysis to forecast market trends, consumer behavior, and technological advancements. This enables the tech giant to identify emerging business opportunities, assess the viability of new products, and allocate resources effectively.

Google’s investment in cloud computing and AI technologies, such as TensorFlow for machine learning and BigQuery for data analytics, exemplifies its commitment to harnessing data for strategic advantage. These tools allow Google to process and analyze large datasets quickly, deriving insights that inform its innovation strategies and support data-driven decisions. By continuously analyzing financial metrics in conjunction with market data, Google can navigate market uncertainties, capitalize on new opportunities, and sustain its leadership in the tech industry.

Related: How to Become a Financial Analyst?

10. Enhancing Supply Chain Resilience: Procter & Gamble (P&G)

P&G’s approach to enhancing supply chain resilience is a prime example of financial analytics applied to operational challenges. The company utilizes digital twin technology, which creates a virtual model of the supply chain, enabling P&G to simulate various scenarios and predict the impact of disruptions. This predictive capability, combined with real-time analytics, allows P&G to anticipate supply chain vulnerabilities, optimize inventory management, and maintain product availability even in the face of unforeseen challenges.

P&G’s use of predictive analytics extends to demand forecasting, where machine learning models analyze sales data, market trends, and consumer behavior to predict future product demand accurately. This foresight enables the company to adjust production and distribution plans proactively, minimizing the risk of stockouts or excess inventory. The integration of these technologies into P&G’s supply chain strategy not only improves operational efficiency but also enhances the company’s ability to respond agilely to market changes, ensuring a competitive advantage in the fast-moving consumer goods industry.
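The scenario-simulation idea can be illustrated with a small Monte Carlo sketch: simulate random supplier disruptions and measure stockout risk at different safety stock levels. All parameters are invented for illustration, and a simple simulation like this only gestures at what a full digital twin does.

```python
# Monte Carlo sketch of supply chain scenario analysis (parameters invented).
import numpy as np

rng = np.random.default_rng(21)
n_sims, n_weeks = 10_000, 12
weekly_demand = rng.normal(1000, 150, size=(n_sims, n_weeks))
# Assume a 5% weekly chance of a supplier disruption that halves supply
disrupted = rng.uniform(size=(n_sims, n_weeks)) < 0.05
supply = np.where(disrupted, 500, 1000)

for safety_stock in (0, 500, 1000, 2000):
    inventory = safety_stock + np.cumsum(supply - weekly_demand, axis=1)
    stockout_risk = (inventory < 0).any(axis=1).mean()
    print(f"safety stock {safety_stock:>4}: stockout risk {stockout_risk:.1%}")
```

Comparing stockout risk against the carrying cost of each safety stock level is what turns a simulation like this into a financial decision.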

These financial analytics case studies demonstrate the transformative power of financial analytics across diverse sectors, highlighting how the strategic integration of technologies such as artificial intelligence, machine learning, predictive analytics, and data visualization enables organizations to unearth valuable insights, streamline operations, and fulfill strategic objectives. As the domain of financial analytics advances, the adoption of these sophisticated technologies becomes imperative for businesses intent on navigating the intricacies of today’s financial landscape. This evolution not only fuels innovation but also secures a competitive advantage, ensuring that companies remain agile and forward-thinking in an era of unprecedented change.


Team DigitalDefynd

We help you find the best courses, certifications, and tutorials online. Hundreds of experts come together to handpick these recommendations based on decades of collective experience. So far we have served 4 Million+ satisfied learners and counting.



10 Real World Data Science Case Studies Projects with Example

Top 10 Data Science Case Studies Projects with Examples and Solutions in Python to inspire your data science learning in 2023.


Data science has been a trending buzzword in recent times. With wide applications in various sectors like healthcare, education, retail, transportation, media, and banking, data science applications are at the core of pretty much every industry out there. The possibilities are endless: analysis of fraud in the finance sector or the personalization of recommendations for eCommerce businesses. We have developed ten exciting data science case studies to explain how data science is leveraged across various industries to make smarter decisions and develop innovative personalized products tailored to specific customers.


Table of Contents

  • Data Science Case Studies in Retail
  • Data Science Case Study Examples in the Entertainment Industry
  • Data Analytics Case Study Examples in the Travel Industry
  • Case Studies for Data Analytics in Social Media
  • Real World Data Science Projects in Healthcare
  • Data Analytics Case Studies in Oil and Gas
  • What Is a Case Study in Data Science?
  • How Do You Prepare a Data Science Case Study?
  • 10 Most Interesting Data Science Case Studies with Examples

So, without much ado, let's get started with data science business case studies!

1) Walmart

With humble beginnings as a simple discount retailer, today Walmart operates 10,500 stores and clubs in 24 countries along with eCommerce websites, employing around 2.2 million people around the globe. For the fiscal year ended January 31, 2021, Walmart's total revenue was $559 billion, a growth of $35 billion driven by the expansion of the eCommerce sector. Walmart is a data-driven company that works on the principle of 'Everyday low cost' for its consumers. To achieve this goal, it depends heavily on the advances of its data science and analytics department for research and development, also known as Walmart Labs. Walmart is home to the world's largest private cloud, which can manage 2.5 petabytes of data every hour! To analyze this humongous amount of data, Walmart has created 'Data Café,' a state-of-the-art analytics hub located within its Bentonville, Arkansas headquarters. The Walmart Labs team heavily invests in building and managing technologies like cloud, data, DevOps, infrastructure, and security.


Walmart is experiencing massive digital growth as the world's largest retailer . Walmart has been leveraging Big data and advances in data science to build solutions to enhance, optimize and customize the shopping experience and serve their customers in a better way. At Walmart Labs, data scientists are focused on creating data-driven solutions that power the efficiency and effectiveness of complex supply chain management processes. Here are some of the applications of data science  at Walmart:

i) Personalized Customer Shopping Experience

Walmart analyses customer preferences and shopping patterns to optimize the stocking and displaying of merchandise in its stores. Analysis of Big data also helps Walmart understand new item sales, make decisions about discontinuing products, and evaluate the performance of brands.

ii) Order Sourcing and On-Time Delivery Promise

Millions of customers view items on Walmart.com, and Walmart provides each customer a real-time estimated delivery date for the items purchased. Walmart runs a backend algorithm that estimates this based on the distance between the customer and the fulfillment center, inventory levels, and shipping methods available. The supply chain management system determines the optimum fulfillment center based on distance and inventory levels for every order. It also has to decide on the shipping method to minimize transportation costs while meeting the promised delivery date.


iii) Packing Optimization 

Also known as box recommendation, this is a daily occurrence in the shipping of items in retail and eCommerce businesses. Whenever items of an order, or multiple orders placed by the same customer, are picked from the shelf and are ready for packing, Walmart's recommender system determines the best-sized box to hold all the ordered items with the least in-box space wastage, within a fixed amount of time. This is the Bin Packing Problem, a classic NP-Hard problem familiar to data scientists; a sketch of the standard greedy heuristic follows.
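For intuition, here is a one-dimensional sketch of the first-fit-decreasing heuristic, a standard greedy approach to bin packing. Real box recommendation works in three dimensions against a catalog of box sizes; the item volumes and capacity below are invented.

```python
# First-fit-decreasing heuristic for 1-D bin packing (illustrative only).
def first_fit_decreasing(item_volumes, box_capacity):
    """Greedily pack items, largest first, into the first box that fits."""
    remaining = []  # remaining capacity of each open box
    packing = []    # items placed in each box
    for volume in sorted(item_volumes, reverse=True):
        for i, space in enumerate(remaining):
            if volume <= space:
                remaining[i] -= volume
                packing[i].append(volume)
                break
        else:  # no open box fits, so open a new one
            remaining.append(box_capacity - volume)
            packing.append([volume])
    return packing

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], box_capacity=10))
# [[8, 2], [4, 4, 1, 1]] -- two boxes instead of a naive three
```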

Here is a link to a sales prediction data science case study to help you understand the applications of Data Science in the real world. The Walmart Sales Forecasting Project uses historical sales data for 45 Walmart stores located in different regions. Each store contains many departments, and you must build a model to project the sales for each department in each store. This data science case study aims to create a predictive model to predict the sales of each product. You can also try your hand at the Inventory Demand Forecasting Data Science Project to develop a machine learning model that forecasts inventory demand accurately based on historical sales data.


2) Amazon

Amazon is an American multinational technology company headquartered in Seattle, USA. It started as an online bookseller, but today it focuses on eCommerce, cloud computing, digital streaming, and artificial intelligence. It hosts an estimated 1,000,000,000 gigabytes of data across more than 1,400,000 servers. Through its constant innovation in data science and big data, Amazon is always ahead in understanding its customers. Here are a few data analytics case study examples from Amazon:

i) Recommendation Systems

Data science models help Amazon understand customer needs and recommend products before the customer even searches for them; these models use collaborative filtering. Amazon draws on purchase data from 152 million customers to help users decide which products to buy. The company generates 35% of its annual sales using its recommendation-based system (RBS).

Here is a Recommender System Project to help you build a recommendation system using collaborative filtering. 
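To see the core idea, here is a tiny item-based collaborative filtering sketch using cosine similarity on a made-up user-item matrix. Amazon's production recommenders are vastly more elaborate; this only conveys the mechanism.

```python
# Item-based collaborative filtering via cosine similarity (toy data).
import numpy as np

# Rows = users, columns = items; 0 means "not purchased/rated"
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
])

norms = np.linalg.norm(ratings, axis=0)
item_sim = (ratings.T @ ratings) / np.outer(norms, norms)  # cosine similarity

user = 1
scores = ratings[user] @ item_sim          # weight items by similarity
scores[ratings[user] > 0] = -np.inf        # mask items the user already has
print("recommend item:", int(np.argmax(scores)))
```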

ii) Retail Price Optimization

Amazon product prices are optimized based on a predictive model that determines the best price so that users do not refuse to buy on the basis of price. The model carefully determines the optimal price by considering the customers' likelihood of purchasing the product and how the price will affect their future buying patterns. The price of a product is determined according to your activity on the website, competitors' pricing, product availability, item preferences, order history, expected profit margin, and other factors.

Check Out this Retail Price Optimization Project to build a Dynamic Pricing Model.
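The underlying logic can be sketched simply: fit a demand curve to observed (price, units sold) pairs, then choose the price that maximizes expected revenue. The data below is synthetic, and a real dynamic pricing system would also model competition, inventory, and segment-level elasticity.

```python
# Toy price optimization: fit a linear demand curve, maximize revenue.
import numpy as np

rng = np.random.default_rng(5)
prices = rng.uniform(10, 50, 200)
units = np.maximum(0, 120 - 2.0 * prices + rng.normal(0, 5, 200))

slope, intercept = np.polyfit(prices, units, deg=1)  # demand ~ a + b*price
candidates = np.linspace(10, 50, 400)
revenue = candidates * (intercept + slope * candidates)
best = candidates[np.argmax(revenue)]
print(f"revenue-maximizing price: ${best:.2f}")  # analytically -a/(2b), ~$30
```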

iii) Fraud Detection

Being a significant eCommerce business, Amazon remains at high risk of retail fraud. As a preemptive measure, the company collects historical and real-time data for every order. It uses Machine learning algorithms to find transactions with a higher probability of being fraudulent. This proactive measure has helped the company restrict clients with an excessive number of returns of products.

You can look at this Credit Card Fraud Detection Project to implement a fraud detection model to classify fraudulent credit card transactions.


Let us explore data analytics case study examples in the entertainment industry.


3) Netflix

Netflix started as a DVD rental service in 1997 and has since expanded into the streaming business. Headquartered in Los Gatos, California, Netflix is the largest content streaming company in the world. Currently, Netflix has over 208 million paid subscribers worldwide, and with streaming supported on thousands of smart devices, around 3 billion hours of content are watched every month. The secret to this massive growth and popularity of Netflix is its advanced use of data analytics and recommendation systems to provide personalized and relevant content recommendations to its users. Netflix collects data from over 100 billion events every day. Here are a few examples of data analysis case studies applied at Netflix:

i) Personalized Recommendation System

Netflix uses over 1,300 recommendation clusters based on consumer viewing preferences to provide a personalized experience. The data that Netflix collects from its users includes viewing time, platform searches for keywords, and metadata related to content abandonment, such as pause time, rewinds, and rewatches. Using this data, Netflix can predict what a viewer is likely to watch and give a personalized watchlist to each user. Some of the algorithms used by the Netflix recommendation system are Personalized Video Ranking, the Trending Now ranker, and the Continue Watching ranker.

ii) Content Development using Data Analytics

Netflix uses data science to analyze the behavior and patterns of its users to recognize themes and categories that the masses prefer to watch. This data is used to produce shows like The Umbrella Academy, Orange Is the New Black, and The Queen's Gambit. Such shows seem like huge risks but are significantly grounded in data analytics, which assured Netflix that they would succeed with its audience. Data analytics is helping Netflix come up with content that its viewers want to watch even before they know they want to watch it.

iii) Marketing Analytics for Campaigns

Netflix uses data analytics to find the right time to launch shows and ad campaigns for maximum impact on the target audience. Marketing analytics helps come up with different trailers and thumbnails for different groups of viewers. For example, the House of Cards Season 5 trailer featuring a giant American flag was launched during the American presidential elections, as it would resonate well with the audience.

Here is a Customer Segmentation Project using association rule mining to understand the primary grouping of customers based on various parameters.


4) Spotify

In a world where purchasing music is a thing of the past and streaming is the current trend, Spotify has emerged as one of the most popular streaming platforms. With 320 million monthly users, around 4 billion playlists, and approximately 2 million podcasts, Spotify leads the pack among well-known streaming platforms like Apple Music, Wynk, Songza, and Amazon Music. The success of Spotify has depended mainly on data analytics. By analyzing massive volumes of listener data, Spotify provides real-time and personalized services to its listeners. Most of Spotify's revenue comes from paid premium subscriptions. Here are some examples of the case studies on data analytics used by Spotify to provide enhanced services to its listeners:

i) Personalization of Content using Recommendation Systems

Spotify uses BART, or Bayesian Additive Regression Trees, to generate music recommendations for its listeners in real time. BART ignores any song a user listens to for less than 30 seconds, and the model is retrained every day to provide updated recommendations. A new patent granted to Spotify for an AI application is used to identify a user's musical tastes based on audio signals, gender, age, and accent to make better music recommendations.

Spotify creates daily playlists for its listeners based on taste profiles. These 'Daily Mixes' contain songs the user has added to their playlists or songs by artists the user has included in their playlists, along with new artists and songs that the user might be unfamiliar with but that might improve the playlist. Similar are the weekly 'Release Radar' playlists, which feature newly released songs by artists that the listener follows or has liked before.

ii) Targeted Marketing through Customer Segmentation

Beyond enhancing personalized song recommendations, Spotify uses this massive dataset for targeted ad campaigns and personalized service recommendations for its users. Spotify uses ML models to analyze listener behavior and group listeners based on music preferences, age, gender, ethnicity, and similar attributes. These insights help the company create ad campaigns for a specific target audience. One of its well-known ad campaigns was the meme-inspired ads for potential target customers, which were a huge success globally.

iii) CNN's for Classification of Songs and Audio Tracks

Spotify builds audio models to evaluate songs and tracks, which helps develop better playlists and recommendations for its users. These models allow Spotify to filter new tracks based on their lyrics and rhythms and recommend them to users who like similar tracks (collaborative filtering). Spotify also uses NLP (natural language processing) to scan articles and blogs and analyze the words used to describe songs and artists. These analytical insights help group and identify similar artists and songs and can be leveraged to build playlists.

Here is a Music Recommender System Project for you to start learning. We have listed another music recommendations dataset for you to use for your projects: Dataset1. You can use this dataset of Spotify metadata to classify songs based on artist, mood, and liveliness. Plot histograms and heatmaps to get a better understanding of the dataset, then use techniques like logistic regression, SVM, and principal component analysis to generate valuable insights from it.
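Following that suggestion, here is a minimal sketch of such a pipeline: PCA for dimensionality reduction feeding a logistic regression classifier. The audio features and mood labels below are invented stand-ins.

```python
# PCA + logistic regression pipeline for a toy song-mood classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(9)
n = 600
X = rng.normal(size=(n, 20))                 # e.g. tempo, energy, valence, ...
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # toy "upbeat vs mellow" label

clf = make_pipeline(PCA(n_components=5), LogisticRegression())
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```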


Below you will find case studies for data analytics in the travel and tourism industry.

5) Airbnb

Airbnb was born in 2007 in San Francisco and has since grown to 4 million hosts and 5.6 million listings worldwide, which have welcomed more than 1 billion guest arrivals in almost every country across the globe. Airbnb is active in every country on the planet except for Iran, Sudan, Syria, and North Korea; that is around 97.95% of the world. Treating data as the voice of its customers, Airbnb uses its large volume of customer reviews and host inputs to understand trends across communities, rate user experiences, and make informed decisions to build a better business model. The data scientists at Airbnb are developing exciting new solutions to boost the business and find the best mapping between its customers and hosts. Airbnb's data servers serve approximately 10 million requests a day and process around one million search queries. Data is the voice of customers at Airbnb, and the company offers personalized services by creating a perfect match between guests and hosts for a supreme customer experience.

i) Recommendation Systems and Search Ranking Algorithms

Airbnb helps people find 'local experiences' in a place with the help of search algorithms that make searches and listings precise. Airbnb uses a 'listing quality score' to find homes based on the proximity to the searched location and uses previous guest reviews. Airbnb uses deep neural networks to build models that take the guest's earlier stays into account and area information to find a perfect match. The search algorithms are optimized based on guest and host preferences, rankings, pricing, and availability to understand users’ needs and provide the best match possible.

ii) Natural Language Processing for Review Analysis

Airbnb characterizes data as the voice of its customers. The customer and host reviews give a direct insight into the experience, but star ratings alone are not an adequate way to understand it. Hence, Airbnb uses natural language processing to understand reviews and the sentiments behind them. The NLP models are developed using convolutional neural networks.

Practice this Sentiment Analysis Project for analyzing product reviews to understand the basic concepts of natural language processing.
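A common baseline for this kind of review analysis is TF-IDF features with a linear classifier; the sketch below shows that baseline on a few invented reviews (Airbnb's production models use convolutional neural networks, as noted above).

```python
# TF-IDF + logistic regression sentiment baseline (invented reviews).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "amazing stay, the host was wonderful", "clean and cozy, loved it",
    "terrible experience, dirty room", "host cancelled last minute, awful",
    "great location and friendly host", "noisy, smelly, would not return",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(reviews, labels)
print(clf.predict(["the apartment was spotless and the host was great"]))
```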

iii) Smart Pricing using Predictive Analytics

The Airbnb host community uses the service as a supplementary income. The vacation homes and guest houses rented to customers raise local community earnings, as Airbnb guests stay 2.4 times longer and spend approximately 2.3 times more money than a hotel guest. These profits have a significant positive impact on the local neighborhood community. Airbnb uses predictive analytics to predict listing prices and help hosts set a competitive and optimal price. The overall profitability of an Airbnb host depends on factors like the time invested by the host and responsiveness to changing demand across seasons. The factors that drive real-time smart pricing are the location of the listing, proximity to transport options, season, and amenities available in the neighborhood of the listing.

Here is a Price Prediction Project to help you understand the concept of predictive analysis, which is common in case studies for data analytics.

6) Uber

Uber is the biggest global taxi service provider. As of December 2018, Uber had 91 million monthly active consumers and 3.8 million drivers, completing 14 million trips each day. Uber uses data analytics and big data-driven technologies to optimize its business processes and provide enhanced customer service. The Data Science team at Uber is constantly exploring futuristic technologies to provide better service. Machine learning and data analytics help Uber make data-driven decisions that enable benefits like ride-sharing, dynamic price surges, better customer support, and demand forecasting. Here are some of the real world data science projects used by Uber:

i) Dynamic Pricing for Price Surges and Demand Forecasting

Uber prices change at peak hours based on demand. Uber uses surge pricing to encourage more cab drivers to sign up with the company and meet the demand from passengers. When prices increase, the driver and the passenger are both informed about the surge in price. Uber uses a patented predictive model for price surging called 'Geosurge,' based on the demand for the ride and the location.

ii) One-Click Chat

Uber has developed a machine learning and natural language processing solution called one-click chat, or OCC, for coordination between drivers and users. This feature anticipates responses to commonly asked questions, making it easy for drivers to respond to customer messages with the click of just one button. One-Click Chat is built on Uber's machine learning platform Michelangelo to perform NLP on rider chat messages and generate appropriate responses to them.

iii) Customer Retention

Failure to meet the customer demand for cabs could lead to users opting for other services. Uber uses machine learning models to bridge this demand-supply gap. By using prediction models to predict the demand in any location, Uber retains its customers. Uber also uses a tier-based reward system, which segments customers into different levels based on usage; the higher the level a user achieves, the better the perks. Uber also provides personalized destination suggestions based on the user's history and frequently traveled destinations.

You can take a look at this Python Chatbot Project and build a simple chatbot application to better understand the techniques used for natural language processing. You can also practice the workings of a demand forecasting model with this project using time series analysis, or look at this project, which uses time series forecasting and clustering on a dataset containing geospatial data to forecast customer demand for Ola rides.
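As a dependency-light illustration of short-term demand forecasting, here is a simple exponential smoothing sketch on synthetic ride counts. Production systems combine richer time series models with geospatial clustering, as described above.

```python
# Simple exponential smoothing for a one-step-ahead demand forecast.
import numpy as np

rng = np.random.default_rng(13)
rides = 200 + 30 * np.sin(np.arange(60) / 4) + rng.normal(0, 10, 60)

def exp_smooth_forecast(series, alpha=0.3):
    """Smooth the series and return the one-step-ahead forecast."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

print(f"next-period demand forecast: {exp_smooth_forecast(rides):.0f} rides")
```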


7) LinkedIn 

LinkedIn is the largest professional social networking site with nearly 800 million members in more than 200 countries worldwide. Almost 40% of the users access LinkedIn daily, clocking around 1 billion interactions per month. The data science team at LinkedIn works with this massive pool of data to generate insights to build strategies, apply algorithms and statistical inferences to optimize engineering solutions, and help the company achieve its goals. Here are some of the real world data science projects at LinkedIn:

i) LinkedIn Recruiter Implement Search Algorithms and Recommendation Systems

LinkedIn Recruiter helps recruiters build and manage a talent pool to optimize the chances of hiring candidates successfully. This sophisticated product works on search and recommendation engines. The LinkedIn recruiter handles complex queries and filters on a constantly growing large dataset. The results delivered have to be relevant and specific. The initial search model was based on linear regression but was eventually upgraded to Gradient Boosted decision trees to include non-linear correlations in the dataset. In addition to these models, the LinkedIn recruiter also uses the Generalized Linear Mix model to improve the results of prediction problems to give personalized results.

ii) Recommendation Systems Personalized for News Feed

The LinkedIn news feed is the heart and soul of the professional community. A member's newsfeed is a place to discover conversations among connections, career news, posts, suggestions, photos, and videos. Every time a member visits LinkedIn, machine learning algorithms identify the best exchanges to be displayed on the feed by sorting through posts and ranking the most relevant results on top. The algorithms help LinkedIn understand member preferences and help provide personalized news feeds. The algorithms used include logistic regression, gradient boosted decision trees and neural networks for recommendation systems.

iii) CNN's to Detect Inappropriate Content

Providing a professional space where people can trust and express themselves professionally in a safe community has been a critical goal at LinkedIn. LinkedIn has heavily invested in building solutions to detect fake accounts and abusive behavior on its platform. Any form of spam, harassment, or inappropriate content is immediately flagged and taken down; these can range from profanity to advertisements for illegal services. LinkedIn uses a convolutional neural network based machine learning model. This classifier trains on a dataset containing accounts labeled as either "inappropriate" or "appropriate." The inappropriate list consists of accounts containing "blocklisted" phrases or words and a small portion of manually reviewed accounts reported by the user community.

Here is a Text Classification Project to help you understand NLP basics for text classification. You can find a news recommendation system dataset to help you build a personalized news recommender system. You can also use this dataset to build a classifier using logistic regression, Naive Bayes, or Neural networks to classify toxic comments.


8) Pfizer

Pfizer is a multinational pharmaceutical company headquartered in New York, USA. One of the largest pharmaceutical companies globally, it is known for developing a wide range of medicines and vaccines in disciplines like immunology, oncology, cardiology, and neurology. Pfizer became a household name in 2020 when it was the first company to have a COVID-19 vaccine authorized by the FDA, and in early November 2021 the CDC approved the Pfizer vaccine for kids aged 5 to 11. Pfizer has been using machine learning and artificial intelligence to develop drugs and streamline trials, which played a massive role in developing and deploying the COVID-19 vaccine. Here are a few data analytics case studies by Pfizer:

i) Identifying Patients for Clinical Trials

Artificial intelligence and machine learning are used to streamline and optimize clinical trials to increase their efficiency. Natural language processing and exploratory data analysis of patient records can help identify suitable patients for clinical trials, including patients with distinct symptoms. They can also help examine the interactions of potential trial members' specific biomarkers and predict drug interactions and side effects, which helps avoid complications. Pfizer's AI implementation helped rapidly identify signals within the noise of millions of data points across its 44,000-candidate COVID-19 clinical trial.

ii) Supply Chain and Manufacturing

Data science and machine learning techniques help pharmaceutical companies better forecast demand for vaccines and drugs and distribute them efficiently. Machine learning models can help identify efficient supply systems by automating and optimizing the production steps. These will help supply drugs customized to small pools of patients in specific gene pools. Pfizer uses Machine learning to predict the maintenance cost of equipment used. Predictive maintenance using AI is the next big step for Pharmaceutical companies to reduce costs.

iii) Drug Development

Computer simulations of proteins, tests of their interactions, and yield analysis help researchers develop and test drugs more efficiently. In 2016, Watson Health and Pfizer announced a collaboration to utilize IBM Watson for Drug Discovery to help accelerate Pfizer's research in immuno-oncology, an approach to cancer treatment that uses the body's immune system to help fight cancer. Deep learning models have recently been used for bioactivity and synthesis prediction for drugs and vaccines, in addition to molecular design. Deep learning has been a revolutionary technique for drug discovery, as it factors in everything from new applications of medications to possible toxic reactions, which can save millions in drug trials.

You can create a machine learning model to predict molecular activity to help design medicine using this dataset. You may build a CNN or a deep neural network for this case study project.


9) Shell Data Analyst Case Study Project

Shell is a global group of energy and petrochemical companies with over 80,000 employees in around 70 countries. Shell uses advanced technologies and innovations to help build a sustainable energy future. Shell is going through a significant transition, as the world needs more and cleaner energy solutions, and aims to be a clean energy company by 2050; this requires substantial changes in the way energy is used. Digital technologies, including AI and machine learning, play an essential role in this transformation. These include efficient exploration and energy production, more reliable manufacturing, more nimble trading, and a personalized customer experience. Using AI in various phases of the organization will help Shell achieve this goal and stay competitive in the market. Here are a few data analytics case studies in the petrochemical industry:

i) Precision Drilling

Shell is involved in the whole oil and gas supply chain, ranging from mining hydrocarbons to refining the fuel to retailing it to customers. Recently, Shell has adopted reinforcement learning to control the drilling equipment used in mining. Reinforcement learning works on a reward-based system built around the outcome of the AI model. The algorithm is designed to guide the drills as they move through the subsurface, based on historical data from drilling records, including information such as the size of drill bits, temperatures, pressures, and knowledge of seismic activity. This model helps the human operator understand the environment better, leading to better and faster results with minor damage to the machinery used.

ii) Efficient Charging Terminals

Due to climate changes, governments have encouraged people to switch to electric vehicles to reduce carbon dioxide emissions. However, the lack of public charging terminals has deterred people from switching to electric cars. Shell uses AI to monitor and predict the demand for terminals to provide efficient supply. Multiple vehicles charging from a single terminal may create a considerable grid load, and predictions on demand can help make this process more efficient.

iii) Monitoring Service and Charging Stations

Another Shell initiative, trialed in Thailand and Singapore, is the use of computer vision cameras that watch for potentially hazardous activities, such as lighting cigarettes in the vicinity of the pumps while refueling. The model is built to process the content of the captured images and label and classify it. The algorithm can then alert the staff and hence reduce the risk of fires. The model could be further trained to detect rash driving or theft in the future.

Here is a project to help you understand multiclass image classification. You can use the Hourly Energy Consumption Dataset to build an energy consumption prediction model. You can use time series with XGBoost to develop your model.
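A hedged sketch of that suggested approach follows: derive calendar features from timestamps and fit a gradient-boosted regressor. It assumes the xgboost package is installed and uses synthetic hourly load rather than the actual Hourly Energy Consumption dataset.

```python
# Calendar features + XGBoost for hourly energy consumption (synthetic data).
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

idx = pd.date_range("2024-01-01", periods=24 * 90, freq="h")
rng = np.random.default_rng(17)
hours = idx.hour.to_numpy()
load = 500 + 100 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 20, len(idx))

X = pd.DataFrame({"hour": idx.hour, "dayofweek": idx.dayofweek,
                  "month": idx.month})
split = len(X) - 24 * 7  # hold out the final week
model = XGBRegressor(n_estimators=200, max_depth=4).fit(X[:split], load[:split])
preds = model.predict(X[split:])
print(f"holdout MAE: {np.abs(preds - load[split:]).mean():.1f}")
```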

10) Zomato Case Study on Data Analytics

Zomato was founded in 2010 and is currently one of the most well-known food tech companies. Zomato offers services like restaurant discovery, home delivery, online table reservation, and online payments for dining. Zomato partners with restaurants to provide tools to acquire more customers while also providing delivery services and easy procurement of ingredients and kitchen supplies. Currently, Zomato has over 2 lakh restaurant partners and around 1 lakh delivery partners, and has closed over ten crore delivery orders to date. Zomato uses ML and AI to boost its business growth, drawing on the massive amount of data collected over the years from food orders and user consumption patterns. Here are a few examples of data analytics case study projects developed by the data scientists at Zomato:

i) Personalized Recommendation System for Homepage

Zomato uses data analytics to create personalized homepages for its users. Zomato uses data science to provide order personalization, like giving recommendations to the customers for specific cuisines, locations, prices, brands, etc. Restaurant recommendations are made based on a customer's past purchases, browsing history, and what other similar customers in the vicinity are ordering. This personalized recommendation system has led to a 15% improvement in order conversions and click-through rates for Zomato. 

You can use the Restaurant Recommendation Dataset to build a restaurant recommendation system to predict what restaurants customers are most likely to order from, given the customer location, restaurant information, and customer order history.

ii) Analyzing Customer Sentiment

Zomato uses natural language processing and machine learning to understand customer sentiment from social media posts and customer reviews. These help the company gauge the inclination of its customer base towards the brand. Deep learning models analyze the sentiment of brand mentions on social networking sites like Twitter, Instagram, LinkedIn, and Facebook. These analytics give the company insights that help build the brand and understand the target audience.

iii) Predicting Food Preparation Time (FPT)

Food preparation time is an essential variable in the estimated delivery time of an order placed using Zomato. It depends on numerous factors like the number of dishes ordered, time of day, footfall in the restaurant, and day of the week. Accurate prediction of the food preparation time enables a better estimated delivery time, making delivery partners less likely to breach it. Zomato uses a bidirectional LSTM-based deep learning model that considers all these features and predicts the food preparation time for each order in real time.
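For a sense of what such a model looks like, here is an illustrative Keras sketch of a small bidirectional LSTM regressor. The input encoding (a short sequence of per-interval restaurant load features) and the architecture are assumptions for demonstration; Zomato's actual feature set is not public beyond the description above.

```python
# Illustrative bidirectional LSTM regressor for prep-time prediction.
# Assumed inputs: 8 time steps x 3 features (queue length, staff, hour).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(19)
X = rng.normal(size=(1000, 8, 3)).astype("float32")
y = (10 + 2 * X[:, :, 0].sum(axis=1) + rng.normal(0, 1, 1000)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8, 3)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16)),
    tf.keras.layers.Dense(1),  # predicted preparation minutes
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print("predicted prep time (min):",
      float(model.predict(X[:1], verbose=0)[0, 0]))
```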

Data scientists are companies' secret weapons when analyzing customer sentiments and behavior and leveraging it to drive conversion, loyalty, and profits. These 10 data science case studies projects with examples and solutions show you how various organizations use data science technologies to succeed and be at the top of their field! To summarize, Data Science has not only accelerated the performance of companies but has also made it possible to manage & sustain their performance with ease.

FAQs on Data Analysis Case Studies

What is a case study in data science?

A case study in data science is an in-depth analysis of a real-world problem using data-driven approaches. It involves collecting, cleaning, and analyzing data to extract insights and solve challenges, offering practical insights into how data science techniques can address complex issues across various industries.

How do you prepare a data science case study?

To create a data science case study, identify a relevant problem, define objectives, and gather suitable data. Clean and preprocess data, perform exploratory data analysis, and apply appropriate algorithms for analysis. Summarize findings, visualize results, and provide actionable recommendations, showcasing the problem-solving potential of data science techniques.



Financial Case Study Analysis

In the fast-paced world of finance, navigating the labyrinth of data can be akin to finding a needle in a haystack. However, with the right tools and methodologies, you can extract invaluable insights that drive strategic decision-making and foster business growth.

By diving into a financial case study analysis, you'll uncover not just numbers on a page but a narrative that reveals the true heartbeat of a company. Understanding this narrative can be the key to unlocking a treasure trove of opportunities and mitigating potential risks.

Key Takeaways

  • Evaluate financial health for informed decisions.
  • Identify areas for improvement and performance enhancement.
  • Compare metrics with industry peers for benchmarking.
  • Analyze trends and risks to make data-driven strategic decisions.

Importance of Financial Analysis

Why is financial analysis essential for making informed business decisions?

Financial analysis plays a critical role in providing a thorough understanding of a company's financial health. Through financial health assessment, businesses can evaluate their current financial standing, identify areas of improvement, and make strategic decisions to enhance overall performance.

By conducting performance benchmarking, companies can compare their financial metrics with industry peers or competitors to gain insights into their relative position and identify areas where they may need to catch up or where they excel.

Profitability analysis is another key aspect of financial analysis that helps businesses assess the efficiency of their operations and identify opportunities to increase profitability. By analyzing various financial ratios and metrics, organizations can pinpoint areas where they can cut costs, optimize resources, or explore new investment opportunities to drive growth and enhance their financial performance.

Identifying Key Financial Indicators

To effectively assess a company's financial health and performance, it's fundamental to identify key financial indicators that provide valuable insights into its operational efficiency and profitability. Key ratios play an essential role in this assessment, offering a snapshot of various aspects of a company's financial status.

Here are four key financial indicators to consider when evaluating a company's financial health (a quick computation sketch follows the list):

  • Profit Margin : This ratio indicates the company's profitability by showing how much profit it generates for each dollar of revenue.
  • Return on Investment (ROI) : ROI measures the return on an investment relative to its cost, providing insight into the efficiency of capital deployment.
  • Debt-to-Equity Ratio : This ratio reveals the proportion of debt and equity a company is using to finance its assets, indicating its financial leverage.
  • Current Ratio : The current ratio assesses the company's ability to cover its short-term liabilities with its short-term assets, reflecting its liquidity position.
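A minimal sketch of how these four indicators are computed, using made-up figures:

```python
# Minimal sketch: computing the four indicators above from sample figures.
# All numbers are invented for illustration.
financials = {
    "revenue": 1_200_000,
    "net_income": 180_000,
    "investment_cost": 500_000,
    "investment_gain": 575_000,
    "total_debt": 400_000,
    "total_equity": 800_000,
    "current_assets": 350_000,
    "current_liabilities": 200_000,
}

profit_margin = financials["net_income"] / financials["revenue"]
roi = (financials["investment_gain"] - financials["investment_cost"]) / financials["investment_cost"]
debt_to_equity = financials["total_debt"] / financials["total_equity"]
current_ratio = financials["current_assets"] / financials["current_liabilities"]

print(f"Profit margin:  {profit_margin:.1%}")   # 15.0%
print(f"ROI:            {roi:.1%}")             # 15.0%
print(f"Debt-to-equity: {debt_to_equity:.2f}")  # 0.50
print(f"Current ratio:  {current_ratio:.2f}")   # 1.75
```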

Analyzing Financial Trends

When analyzing financial trends, you'll focus on:

  • Revenue growth analysis
  • Expense trend evaluation

These two vital points provide essential insights into the financial health and performance of a company. By examining these trends, you can identify patterns, make informed decisions, and drive strategic actions.

Revenue Growth Analysis

Analyzing revenue growth trends provides valuable insights into the financial performance and potential future success of a company. When examining revenue growth, consider the following (a small forecasting sketch follows the list):

  • Revenue Forecasting: Utilize historical data and market trends to predict future revenue streams accurately.
  • Competitive Analysis: Compare your revenue growth to industry competitors to evaluate your market position.
  • Market Segmentation: Identify which market segments are driving revenue growth for targeted strategies.
  • Pricing Strategy: Assess the impact of pricing changes on revenue growth and adjust strategies accordingly.
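For illustration, a naive forecast can be produced by fitting a linear trend to historical revenue. The figures below are invented, and a real forecast would also model seasonality, segments, and market data:

```python
# Minimal sketch: a naive revenue forecast from a linear trend fit to
# historical quarterly revenue ($m). Figures are made up.
import numpy as np

revenue = np.array([10.2, 10.9, 11.5, 12.4, 13.1, 13.8])
quarters = np.arange(len(revenue))

slope, intercept = np.polyfit(quarters, revenue, deg=1)
next_two = intercept + slope * np.arange(len(revenue), len(revenue) + 2)

# Compound average quarterly growth over the observed window.
growth = (revenue[-1] / revenue[0]) ** (1 / (len(revenue) - 1)) - 1
print(f"Avg quarterly growth: {growth:.1%}")   # ~6.2%
print(f"Next two quarters:    {np.round(next_two, 1)}")
```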

Expense Trend Evaluation

In evaluating financial trends, closely monitor and analyze expense trends to ensure the company's operational efficiency and sound cost management. By examining expense reduction strategies and applying trend forecasting techniques, you can pinpoint areas where costs can be optimized.

Analyzing expense trends over time allows you to spot spikes or dips, enabling you to take proactive measures to maintain financial stability. Look for patterns in expenses and compare them to revenue trends to ensure a balanced financial strategy.

Implementing effective cost control measures based on these analyses can lead to improved profitability and sustainability for the organization. Keep a keen eye on expense trends as they can provide valuable information for strategic decision-making and long-term financial health.

Evaluating Potential Risks

To assess the potential risks associated with the financial case study, identify and prioritize key risk factors that may impact the analysis. When evaluating potential risks, consider the following (a simulation sketch follows the list):

  • Risk Assessment : Begin by conducting a thorough risk assessment to identify all potential threats to the financial analysis process. This includes market risks, regulatory risks, and operational risks that could impact the outcomes.
  • Mitigation Strategies : Once risks are identified, develop effective mitigation strategies to address each risk factor. This may involve diversifying investments, implementing hedging strategies, or establishing contingency plans to minimize potential negative impacts.
  • Financial Health Evaluation : Evaluate the overall financial health of the organization under study to understand its resilience to different risk scenarios. This will help in determining the level of risk tolerance and the ability to withstand financial shocks.
  • Scenario Planning : Engage in scenario planning to simulate different risk scenarios and assess their potential impact on the financial analysis. By considering various outcomes, you can better prepare for uncertainties and make informed decisions.
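Scenario planning is often operationalized as a Monte Carlo simulation. The sketch below is illustrative only; the distributions and figures are assumptions chosen purely for demonstration:

```python
# Minimal sketch: Monte Carlo scenario planning for next-year net income.
# All distributions and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

revenue = rng.normal(loc=50.0, scale=5.0, size=n)      # $m, base case 50
cost_ratio = rng.uniform(0.70, 0.85, size=n)           # costs as % of revenue
one_off_loss = rng.binomial(1, 0.10, size=n) * 4.0     # 10% chance of a $4m shock

net_income = revenue * (1 - cost_ratio) - one_off_loss

print(f"Median outcome: ${np.median(net_income):.1f}m")
print(f"5th percentile: ${np.percentile(net_income, 5):.1f}m")
print(f"P(loss):        {(net_income < 0).mean():.1%}")
```

Reading off the tail percentiles and the probability of a loss is what turns the simulation into a risk-tolerance discussion.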

Uncovering Insights for Decision Making

Explore the data to unearth key insights important for informed decision-making in the financial case study analysis. By delving into financial metrics and adopting an analytical approach, you can extract valuable decision insights with significant financial implications. Through a meticulous examination of the data points and trends, you can identify patterns, outliers, and correlations that provide a deeper understanding of the financial landscape under scrutiny.

Analyzing financial metrics such as revenue growth, profit margins, return on investment, and cash flow patterns can offer vital insights into the financial health and performance of the entity in question. These insights can guide decision-making processes, helping you make informed choices based on concrete data rather than intuition or speculation.

Strategic Guidance Through Analysis

When analyzing financial case studies, strategic planning tips serve as important pillars for decision-making processes.

By leveraging data-driven insights, you can navigate complexities and uncertainties with more clarity.

Strategic guidance through analysis empowers you to make informed choices that align with your long-term objectives.

Strategic Planning Tips

Utilize a structured approach to strategic planning by integrating key performance indicators with market trends analysis. When making strategic decisions and financial forecasts, consider the following tips:

  • Set Clear Goals: Define specific and measurable objectives aligned with the overall business strategy.
  • Evaluate Competitor Strategies: Analyze competitors' moves to anticipate market shifts and stay ahead.
  • Regularly Review KPIs: Monitor key performance indicators to track progress towards goals and adapt strategies accordingly.
  • Stay Agile: Be prepared to adjust plans swiftly in response to changing market conditions or unforeseen challenges.

Data-Driven Decision Making

To better align your strategic planning efforts with data-driven decision making, integrate key performance indicators with thorough market analysis for enhanced strategic guidance. Data analysis plays an important role in informing your decision-making process by providing valuable insights into market trends, customer behavior, and financial performance.

By utilizing data-driven approaches, you can identify opportunities for growth, pinpoint areas for improvement, and make informed decisions that are backed by evidence. Effective decision making hinges on the ability to interpret and leverage data effectively.

Incorporating data analysis into your strategic planning allows you to stay agile, responsive to market changes, and proactive in addressing challenges. Embracing a data-driven mindset empowers you to navigate complexities with confidence and make strategic choices that drive success.

Driving Business Success

Implementing a strategic approach to operations is imperative for driving business success in today's competitive market environment. To achieve this, consider the following key strategies:

  • Efficiency Enhancement : Streamlining processes and workflows can lead to performance improvement and cost reduction. Implementing automation and optimizing resource allocation can help maximize output while minimizing expenses.
  • Market Analysis : Conduct thorough market research to identify opportunities for growth and profit maximization. Understanding consumer needs and competitor strategies can provide insights for developing effective business plans.
  • Customer Relationship Management : Building strong relationships with customers can enhance loyalty and drive repeat business. Implementing customer feedback mechanisms and personalized services can lead to increased customer satisfaction and retention.
  • Employee Development : Investing in employee training and development can boost productivity and morale. Engaged and skilled employees are more likely to contribute positively to the company's overall success.

Practical Tips for Evaluation

To evaluate the effectiveness of the key strategies discussed in driving business success, practical tips for evaluation can provide valuable insights into the overall performance and impact of these initiatives.

Evaluation techniques play an important role in understanding the outcomes of implemented strategies. Conducting case studies can help assess real-world applications of these strategies, offering practical insight into their success or the areas needing improvement.

Utilizing financial metrics such as return on investment (ROI), profitability ratios, and cash flow analysis can provide a quantitative understanding of the impact on the financial health of the business. Decision support tools can assist in making informed choices based on performance analysis.

Benchmarking strategies against industry standards can offer a comparative perspective to gauge the effectiveness of the implemented strategies. By applying these practical evaluation methods, you can gain a thorough understanding of the success and areas of development within your business strategies.

Conducting Thorough Analysis

Conducting a thorough analysis involves delving deep into the data to extract meaningful insights that can drive informed decision-making and strategic planning within your business. To ensure your analysis is comprehensive and effective, consider the following steps:

  • Utilize Industry Benchmarks : Compare your financial data against industry standards to identify areas of strength and weakness. This benchmarking process can provide valuable context for evaluating your company's performance.
  • Perform Competitor Analysis : Analyze your competitors' financial statements and key performance metrics to understand how your business stacks up against industry rivals. This insight can help you identify opportunities for improvement and areas where you excel.
  • Identify Key Financial Ratios : Calculate and analyze important financial ratios such as profitability ratios, liquidity ratios, and leverage ratios. These ratios can offer valuable insights into your business's financial health and performance.
  • Consider Trend Analysis : Examine historical financial data to identify trends and patterns that can help you forecast future performance and make more informed decisions. Trend analysis can provide valuable insights into your business's trajectory and potential areas for growth.

As you navigate the financial landscape, remember that analysis is your compass, guiding you through the complexities of numbers and trends.

Just like a skilled chef tasting a dish to adjust the seasoning, your financial analysis allows you to fine-tune your decisions for success.

Trust in the power of data to steer you towards your goals and ensure your financial journey is as smooth as a well-balanced recipe.


The 6 Definitive Data Analytics Use Cases in Banking and Financial Services


Financial services organizations were traditionally product-centric, but with the evolving tech landscape they have become customer-centric. Digital transformation in financial services goes deeper than simply digitizing processes and leveraging data.

The focus on personalization and customer experience has deepened since the pandemic. With the shift towards mobile, almost 89% of customers now prefer mobile banking channels, and digital-only banks are overtaking traditional banks. Unlike incumbent financial services organizations, fintech start-ups build their technology and data analytics around customer preferences.

Intense competition and tech disruption are game-changers for fintech. Take the example of loan disbursal: banks run extensive KYC and due-diligence processes that delay disbursal, while data analytics and AI let fintech start-ups decide in minutes. Many leaders were already leveraging compelling data analytics use cases in banking and financial services, but these have to be updated regularly as the tech landscape evolves.

Why do banks need to leverage advanced analytics?

Customer experience is the new competitive battleground for banks and financial services – In traditional banking business models, customer service was synonymous with customer experience. Now, ease of access, ease of use, and near-instant resolutions are the new face of customer experience. With an omnichannel presence, the financial services industry faces added challenges managing the data flowing in from multiple channels.

AI is critical in the new CX – The applications in financial services are manifold: chatbots, AI-powered automation, and AI data analytics. Predicting customer needs, providing services, and resolving queries quickly all enhance customer experience, the new norm for financial services.

Data analytics is not about cutting costs but about improving productivity – Did you know that leveraging advanced data analytics for fraud detection can save costs of up to 20%? Earlier efforts were limited to automating document management systems and repetitive processes; now the focus is on applying technology to credit modeling, risk analysis, and fraud detection, freeing humans for more critical projects.

Advanced Analytics in BFSI – Benefits

Updating the data analytics use cases in banking and financial services alongside evolving data science methodologies can help organizations sustain stronger customer relationships. Let us look at a few more benefits of advanced analytics.

Customer 360-degree insights – By leveraging advanced analytics, financial services organizations can learn more about customer preferences, multichannel touchpoints, and the factors behind buyer behavior. Sales teams may well perceive one need while the data reveals a different consumer behavior. Understanding the customer in detail is critical for banking and financial services, more so than in most other industries.

Personalized customer experience – Experts see personalization as another critical lever in BFSI for reducing churn and improving revenue. Offering the right product at the right time, and reaching out with personalized information grounded in a detailed understanding of each consumer, is now the norm for sales teams in BFSI. A Forrester report says that a single-point improvement in a financial services organization's CX score can improve revenues by $5-$123 mn.

Reduction in operational costs – Banks and financial services organizations are under constant pressure to protect slim profit margins and improve operations. Financial services firms can leverage predictive analytics, visualization, and AI to automate their workflows. Replacing paper-based forms with digital applications and using NLP technologies wherever necessary also reduces manual effort and errors.

Risk mitigation – The main challenge for BFSI firms is analyzing risks like credit, claims, and fraud. Though the practice is not new, banks, insurance companies, and investment bankers need to update their risk approach for evolving technologies and the data exploding from multiple channels. Financial services organizations can modernize their risk management practices more efficiently using predictive, behavioral, and advanced analytics.

Competitive advantage – Fintech organizations with technology at their core are already disrupting financial services, and incumbents now need to adopt technology faster than before. With AI and advanced analytics, a loan application can be processed in minutes, creating more value for customers. Data analytics in banking will enable you to understand unmet customer needs and help you unfold new consumer-centric business models.

Best Definitive Data Analytics Use Cases in Banking and Financial Services

Most of us know the data analytics use cases in banking and financial services, but do we need to view them differently after the pandemic? Our experts say that customer data is changing rapidly, and so are the touchpoints. To implement data analytics in banking successfully, models should reconsider all the data now available from expanded sources. Let us rethink the advanced analytics use cases in light of the changing consumer ecosystem.

Credit Modeling – Credit risk modeling is not new in the banking industry. Traditional risk analytics models provided insights based on income sources, loan history, default rates, credit rating, demographics, and so on, but many other factors need to be analyzed in conjunction with this standard data. Consider consumer loans: dynamics like social media profiles, utility bills, monthly spending, and savings give deeper insight into default risk. Unstructured data plays a vital role in credit risk modeling too; AI-based text analysis and consumer personas provide deeper insight into a customer's financial well-being.
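A minimal sketch of a default-risk model along these lines, mixing bureau-style features with alternative data, follows; the features, coefficients, and data are synthetic, chosen purely for illustration:

```python
# Minimal sketch: a logistic-regression default-risk model on a mix of
# traditional and alternative features. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.normal(55_000, 15_000, n),  # annual income
    rng.integers(0, 6, n),          # prior delinquencies
    rng.uniform(0, 1, n),           # share of utility bills paid on time
    rng.uniform(0, 0.9, n),         # credit utilization
])
# Synthetic default labels, correlated with the risk features.
logit = -2 + 0.8 * X[:, 1] - 1.5 * X[:, 2] + 2.0 * X[:, 3] - X[:, 0] / 100_000
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```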

Risk Analysis and Monitoring – Banks and financial services organizations that implement dynamic risk models with advanced analytics tend to be more resilient to significant external changes. Risk models differ across the sector: credit, fraud, and liquidity risk are the major ones for banks; claims risk and fraud for insurers; and portfolio risk analysis for investment bankers. Fraud is the risk common to most financial services firms, and fraud detection is continuously evolving. Machine learning, AI, and big data now enable organizations to analyze huge volumes of transactions rather than relying on historical data alone. Social media profiles, behavioral analytics, predictive analytics, and advanced machine learning models are leveraged collectively for fraud detection.

Customer Lifetime Value – The trickiest use case, though it looks like the simplest to understand from a banking perspective. Customer lifetime value estimates the future revenue a customer will generate, helping focus marketing efforts and reduce churn. It is hard to estimate how customer behavior changes over time and which factors most influence those decisions. AI-powered models recognize patterns in the data more effectively, surfacing behavioral insights that humans alone may not be able to identify.
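One common simplification is a retention-based, discounted CLV formula. The sketch below uses assumed inputs; production models typically learn retention and margin per customer:

```python
# Minimal sketch: a simple discounted customer-lifetime-value estimate.
# The formula and all inputs are illustrative assumptions.
def clv(annual_margin, retention_rate, discount_rate, years=10):
    """Sum of expected, discounted margins over the customer's lifetime."""
    return sum(
        annual_margin * retention_rate**t / (1 + discount_rate) ** t
        for t in range(years)
    )

# A customer yielding $120/year in margin, 85% yearly retention, 8% discount.
print(f"CLV: ${clv(120, 0.85, 0.08):.0f}")  # ~$512
```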

Product Recommendation Engine – Are we talking about retail? No, product recommendation engines are evolving in banking too. Multiple comparison sites are now available for each financial product: loans, insurance, mutual funds, credit cards, and more. Consumers can make informed choices, but cross-selling financial products at the right time caters to customer needs and enhances trust. Machine learning models process data in real time from various content feeds, making it easier for financial and investment analysts to offer personalized products and services.

Customer segmentation and personalized marketing – Understanding every aspect of the customer is critical for personalization. Customers are now bombarded with different financial products at the same time. How do you know if a customer is looking for an auto loan? Does the customer intend to purchase a home or an automobile? The place and timing of your marketing efforts matter in building trust and prompting action on your marketing messages. You can also scale back awareness marketing if you deliver the right knowledge at the right stage of the buyer journey.
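A minimal segmentation sketch using k-means on behavioral features; the feature choices and data are hypothetical:

```python
# Minimal sketch: k-means customer segmentation on behavioral features.
# All features and data are synthetic, for illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Columns: monthly card spend ($), logins per month, products held, age
customers = np.column_stack([
    rng.gamma(2.0, 800.0, 1_000),
    rng.poisson(12, 1_000),
    rng.integers(1, 6, 1_000),
    rng.integers(18, 80, 1_000),
])

X = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

for s in range(4):
    mean = customers[segments == s].mean(axis=0)
    print(f"Segment {s}: spend=${mean[0]:.0f}, logins={mean[1]:.1f}, "
          f"products={mean[2]:.1f}, age={mean[3]:.0f}")
```

Each cluster's profile (for example, high-spend/low-login versus young/multi-product customers) then drives which offers and channels the marketing team uses.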

AI-powered Virtual Assistants – Consider insurance: a loss or damage event may happen only rarely, so it is the single touchpoint where you can show customers how you care for them and ease the process. Customers now prefer efficient self-service options over in-person contact to process their requests. AI-powered virtual assistants add value by answering queries about products, services, and eligibility criteria in financial services, and they are evolving to validate certain criteria against rules kept current by machine learning models. It would be no surprise to see an AI-powered assistant process an insurance claim in minutes.


The Amazon Case Study (New Edition)


Welcome to CFI’s advanced financial modeling course – a case study on how to value Amazon.com, Inc (AMZN).  This course is designed for professionals working in investment banking, corporate development, private equity, and other areas of corporate finance that deal with valuing companies and applying various methods of valuation.


Advanced Financial Modeling Course Objectives

This advanced financial modeling course has several objectives, including the following (a toy DCF sketch, for orientation, follows the list):

  • Use Amazon’s financial statements to build an integrated 3-statement financial forecast
  • Learn how to structure an advanced valuation model effectively
  • Set up all the assumptions and drivers required to build out the financial forecast and DCF model
  • Create a 10-year forecast for Amazon’s business, including an income statement, balance sheet, cash flow statement, supporting schedules, and free cash flow to the firm (FCFF)
  • Learn how to deal with advanced topics like segmented revenue, capital additions, finance leases, operating leases, and more
  • Perform comparable company analysis (Comps) utilizing publicly available information
  • Perform a Sum-Of-The-Parts (SOTP) valuation of Amazon, as well as consider precedent transactions, equity research price targets, and Amazon’s 52-week trading range
  • Generate multiple operating scenarios to explore a range of outcomes and values for the business
  • Perform detailed sensitivity analysis on key assumptions and assess the overall impact on equity value per share
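To make the DCF mechanics concrete before diving into the course, here is a toy sketch of the core calculation: discount forecast free cash flows to the firm (FCFF) plus a terminal value, then back out an equity value per share. All inputs are made-up placeholders, not CFI's or Amazon's actual figures:

```python
# Toy DCF sketch: PV of forecast FCFF plus a Gordon-growth terminal value,
# bridged to equity value per share. All figures are placeholders.
fcff = [60.0, 68.0, 77.0, 86.0, 95.0]   # $bn, 5-year forecast
wacc, terminal_growth = 0.09, 0.03
net_debt, shares = 50.0, 10.4           # $bn, bn shares

pv_fcff = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcff, start=1))
terminal_value = fcff[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
pv_terminal = terminal_value / (1 + wacc) ** len(fcff)

enterprise_value = pv_fcff + pv_terminal
equity_value = enterprise_value - net_debt
print(f"Implied value per share: ${equity_value / shares:.0f}")  # ~$125
```

The full course model adds the supporting schedules, lease treatment, scenarios, and sensitivity analysis that this sketch deliberately omits.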


Amazon (AMZN) Case Study

This course is built on a case study of Amazon, in which students are tasked with building a financial model and performing comparable company analysis to value AMZN shares and make an investment recommendation.

Over the course of the case study, students will learn:

  • How to build a detailed financial forecast of Amazon
  • How to apply various valuation methodologies to derive an implied value for Amazon
  • How to develop an investment recommendation on the shares of Amazon
  • How to create a dashboard and summary output that highlights the most important information from the model


Why Take CFI’s Advanced Financial Modeling Course?

This course is perfect for anyone who wants to learn how to build a detailed financial model for a public company, from the bottom up. The video-based lessons will teach you all the formulas and functions to calculate things like segmented revenue, marketable securities, accrued expenses, unearned revenue, stock-based compensation, long-term debt, finance and operating leases, and much more.

In addition to learning the detailed mechanics of how to build the financial model for Amazon, students will also learn how to think about intrinsic value, and develop an investment recommendation.


What’s Included in the Advanced Modeling Course?

This advanced financial modeling and valuation course includes all of the following:

  • Blank Amazon model template
  • Completed Amazon model template (dashboard, DCF model, Comps model, WACC analysis, scenarios, etc.)
  • 4+ hours of detailed video instruction
  • Certificate of completion

Recommended Preparatory Courses

We recommend you complete the following courses or possess the equivalent knowledge before taking this course:

  • Excel Fundamentals – Formulas for Finance
  • DCF Valuation Modeling
  • Comparable Valuation Analysis
  • 3-Statement Modeling


This course is intended solely for educational and training purposes. The information contained herein does not constitute investment advice, or an offer to sell, or the solicitation of any offer to buy any securities of Amazon.com, Inc. (NasdaqGS: AMZN) or any other security.

This content in this course has not been approved or disapproved by (a) Amazon.com, Inc., (b) S&P Global Market Intelligence Inc., (c) any equity research analyst that covers Amazon.com, Inc., or (d) any securities regulator in any province or territory of Canada, the United States Securities and Exchange Commission or any other United States federal or state regulatory authority, and no such commission or authority has passed upon the merits, accuracy or adequacy of this content, nor is it intended that any will.

The information in this course does not constitute the provision of investment, tax, legal or other professional advice. As with all investments, there are associated risks and you could lose money investing – including, potentially, your entire investment. Prior to making any investment, a prospective investor should consult with its own investment, accounting, legal and tax advisers to evaluate independently the risks, consequences and suitability of that investment.

No reliance may be placed for any purpose on the information and opinions contained herein or their accuracy or completeness, and nothing contained herein may be relied upon in making any investment decision.

Approx 12.5h to complete

100% online and self-paced

What you'll learn

  • Introduction
  • Building a financial forecast
  • Implied value analysis
  • Investment recommendation
  • Course conclusion
  • Qualified assessment

This course is part of the following programs:

Why stop here? Expand your skills and show your expertise with the professional certifications, specializations, and CPE credits you’re already on your way to earning.

Financial Modeling & Valuation Analyst (FMVA®) Certification

  • Skills Learned Financial modeling and valuation, sensitivity analysis, strategy
  • Career Prep Investment banking and equity research, FP&A, corporate development


Data Science in Finance: The Top 9 Use Cases

What are the most common applications of data science in the finance industry? Find out in this post.

There’s no question that big data has transformed our economy. Perhaps the best example of this is the disruption it’s had on the world’s finance sector. As one of the first industries to fully embrace big data, finance has used the digital revolution to go from strength to strength. They now offer everything from automated pricing to personalized online banking. And at the heart of all this change? Big data and data scientists. In tribute to these practical wonder wizards, let’s check out the top nine applications of data science in the finance industry.

1. Real-time stock market insights

Data’s role in the stock market has always been important, even before the digital age. Historically, keeping track of which shares to buy and sell meant analyzing past data by hand. This allowed investors to make the best possible decisions, but it was an imperfect approach. It didn’t take into account the volatility of the market, meaning traders could only use data that had been manually tracked and measured, combined with their personal intuition. Bad investment decisions using outdated data were, unsurprisingly, not uncommon.

Today, by leveraging technological advances, financial data scientists have (to all practical ends) eradicated this data latency, providing us with a constant stream of real-time insights. Using dynamic data pipelines, traders can now access stock market information as and when it happens. Tracking transactions in real-time, they can make much smarter decisions about which stocks to buy and sell, vastly reducing the margin of error. These real-time technologies have also had a knock-on effect across the financial sector, as we’ll see.

2. Algorithmic trading

The goal of stock market trading is to buy shares at a low price, before selling them on at a profit. This involves using past and present market trends to understand which stocks are likely to increase or reduce in price. To maximize profit, stock market traders have to get in there quickly, buying and selling shares before their competitors. This used to be done manually. However, with the arrival of big data and real-time insights, the landscape has been transformed. A consequence of real-time insights is the ability (and requirement) to trade far more quickly. Eventually, the speed of trading overtook what humans could manage.

Enter algorithmic trading. With machine learning algorithms trained using existing data, financial data scientists have created an entirely new type of trading: high-frequency trading (HFT). Because the process is now completely automated, buying and selling can happen at lightning speeds. Indeed, the algorithms used are so unbelievably fast that they’ve led to a new practice in the market. Known as ‘co-location,’ this involves placing computers in data centers as close as physically possible to the stock market exchange (often on the same premises). This shaves mere fractions of a second off the time it takes to carry out a trade, but those fractions of a second keep investors ahead of the competition. Pretty incredible stuff!
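At the opposite end of the sophistication scale from HFT, a moving-average crossover rule is the classic classroom sketch of algorithmic trading logic. The prices below are synthetic, and this is an illustration of the idea, not a production strategy:

```python
# Minimal sketch: a moving-average crossover rule on synthetic prices.
# Real algorithmic trading operates on live tick data at far higher speeds.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
prices = pd.Series(100 + np.cumsum(rng.normal(0.05, 1.0, 500)))

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()

# Long when the fast average is above the slow one, flat otherwise;
# shift(1) avoids trading on information we don't yet have.
position = (fast > slow).astype(int).shift(1).fillna(0)
strategy_returns = position * prices.pct_change().fillna(0)

print(f"Buy-and-hold return: {prices.iloc[-1] / prices.iloc[0] - 1:.1%}")
print(f"Strategy return:     {(1 + strategy_returns).prod() - 1:.1%}")
```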

3. Automated risk management

Financial risk management is all about protecting organizations from potential threats. The threats themselves can be wide-ranging and include things like credit risk (e.g. ‘is this customer going to default on their card payments?’) and market risk (e.g. ‘is the housing bubble going to burst?’). Other types include inflation risk, legal risk, and so on. Essentially, anything that might negatively impact a financial institution’s functioning or profit can be considered a risk.

In its base form, risk management involves three tasks: detecting risks, monitoring risks, and prioritizing which risks to deal with most urgently. This might sound straightforward, but once you consider all the risk factors and how they intersect, it quickly becomes highly complex. Getting it right can be the difference between success and financial ruin. Unsurprisingly, then, data scientists have a key role to play in solving these problems, and they have leveraged machine learning (ML) to do so.

By automating the identification, monitoring, and prioritization of risk, ML algorithms minimize the scope for human error. They also take into account a huge variety of different data sources (from financial data to market data and customer social media) measuring how these different sources impact one another. Getting this right has become an art form. To illustrate, credit card firms using automated risk management software can now accurately determine a potential customer’s trustworthiness, even if they lack the customer’s comprehensive financial background.

A benefit of these algorithms is that they improve as they grow. AI-based risk management and smart underwriting can make connections that human beings alone would never spot. This is the power of machine learning. While these approaches are relatively new in the financial industry, their potential for the future is huge.

4. Fraud detection

Financial fraud comes in many forms: credit card fraud, inflated insurance claims, and organized crime, to name a few. Keeping on top of fraud is vital for any financial institution. This is not just about minimizing financial losses; it’s also about trust. Banks have a responsibility to ensure that their customers’ money is secure.

Once again, real-time analytics comes to the rescue. Using data mining and artificial intelligence (AI), data scientists can detect anomalies or unusual patterns as they occur. Specially-designed algorithms then alert the institution to the anomalous behavior and automatically block the suspicious activity. The most obvious example of this is credit card fraud. For instance, if your card gets used in an unusual location, or withdrawals are made in a pattern matching that commonly used by fraudsters, the credit card company can block the card and inform you that something is wrong before you even know it.
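A minimal sketch of unsupervised outlier detection on card transactions using an Isolation Forest follows; the features and data are synthetic stand-ins, and real systems layer many models, rules, and real-time signals on top:

```python
# Minimal sketch: flagging anomalous card transactions with an
# Isolation Forest. All transactions here are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# Columns: amount ($), hour of day, distance from home (km)
normal = np.column_stack([rng.gamma(2, 30, 2_000),
                          rng.normal(14, 4, 2_000),
                          rng.gamma(1, 5, 2_000)])
fraud = np.column_stack([rng.gamma(5, 200, 20),
                         rng.uniform(0, 5, 20),
                         rng.gamma(3, 300, 20)])
X = np.vstack([normal, fraud])

detector = IsolationForest(contamination=0.01, random_state=3).fit(X)
flags = detector.predict(X)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(flags == -1)} of {len(X)} transactions "
      f"({np.sum(flags[-20:] == -1)} of the 20 injected frauds)")
```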

While detecting this type of outlier behavior is useful to individuals like you and me, fraud detection goes much further. Machine learning can also spot broader patterns of anomalous behavior, e.g. different organizations being hacked simultaneously. This can help banks identify cyber-attacks and organized crime, potentially saving them millions.

5. Consumer analytics

For any bank or financial services provider, understanding customer behavior is vital for making the right decisions. And the best way to understand customers? You got it: through their data. Financial data scientists increasingly use market segmentation (breaking down customers into granular demographics) to create highly sophisticated profiles. Combining various data sources and using demographics like age and geographic location, banks, insurance companies, pension funds, and credit card firms can gain very precise insights.

Using these insights, they can tailor their direct marketing and customer relationship management approach accordingly. This might involve using data to upsell particular products or to improve customer service.

Customer analytics also allows organizations to determine what’s known as the ‘customer lifetime value,’ a metric that predicts the net profit a customer will provide across all past, present, and future interactions with the organization. If this value is high, you can bet customers will be well cared for! This is a good reminder that, while the customer may always be right, insights gleaned from their data are regularly used to benefit the business, too!

6. Personalized services

Before the internet, people had to do all their banking in a physical bank. This seems completely inefficient by today’s standards, but it did mean that people got to know their bank manager. However, as the customer experience moved online, this relationship became much more transactional. That personal touch got lost. How to remain personal and relevant in the digital age has been a longstanding problem for banks. But once again, data analytics comes to the rescue!

A happy client is good for business, and that’s why personalized services focus on customer care. As you’ll know if you’ve ever used online banking, there are tonnes of personalized services available. And these are driven by data. They can be divided into three types.

The first is prescriptive personalization. This uses past customer data and preferences to anticipate what they need. It’s generally driven by rule-based algorithms that respond to customer interactions.

The second type is real-time personalization. This relies on both past and present data to tailor the customer experience as it’s happening (for example, if you’re recommended a product or service as you’re carrying out an online transaction).

The final type is machine learning personalization. Although this is a relatively new concept, it already has cool potential. A great example is the fintech software wallet.AI, which uses your financial profile and transaction history to act as a personal advisor on your daily spending. Great if you’re not so good with money. What might the future hold?

7. Pricing and revenue optimization

Pricing optimization is the ability to shape pricing based on the context in which customers encounter it. Most banks and insurance providers have large sales teams, offering complex webs of different products and services. If they work in isolation, they can often be unaware of products available elsewhere in the business. And because they’re usually driven by the bottom line, it can be easy for sales teams to fall back on personal experience rather than data-driven insights.

Using a variety of data from sources such as surveys, past product pricing, and sales histories, financial data scientists can help drive profit and save headaches for these sales teams.

How does this work in practice? Well, advanced machine learning analytics can carry out tests on various scenarios (e.g. whether to bundle services together or to sell them individually) allowing teams to produce smarter strategies. Financial data scientists will also ensure these algorithms integrate effectively with an organization’s systems, drawing data as necessary to automate much of the process. This means salespeople can do what they do best: sell! While pricing optimization may sound cynical, it ultimately gives customers what they want (good value) while maximizing profit for the company. Everybody wins.

8. Product development

One of the fastest-growing uses of data science in the finance industry comes from fintech (financial technology) providers. This nascent area of the industry has only emerged in recent years, but has been quick to take advantage of the sluggish pace of change prevalent in larger, more rigid financial organizations (such as older banks). Sweeping in with a disruptive start-up mentality, fintech companies are offering exciting innovations at a much faster pace than global organizations can manage.

While many fintech providers have launched digital banks, others focus on specific areas of technology, before selling these on. Blockchain and cryptocurrency, mobile payment platforms, analytics-driven trading apps, lending software, and AI-based insurance products are just a few examples of fintech that is driven by data science.

9. General data management

As mentioned, financial institutions have access to huge amounts of data. The potential sources are vast: mobile interactions, social media data, cash transactions, marketplace reports…you get the idea. It’s not something many people think about, but besides the social media giants, the finance sector has access to more of our data than probably any other industry. Harnessed properly, these goldmines of data can provide invaluable financial business intelligence. But harnessing these data properly is half of the challenge.

While the majority of these data are digitized, most lack any structure at all. And with real-time data constantly streaming in, bringing order to this chaos is a headache. While numbers one to eight on our list explored the flashy results of this data science journey, data management in finance is a huge task in itself. It requires teams of data experts who can build data warehouses, mine data, understand the complexities of the industry, and do all this while developing novel approaches to working with it. Data engineers and data architects (who manage data itself) are vital to effective financial data management.

Wrap-up and further reading

In this post, we explored the nine top uses of data science in the finance sector. As we’ve learned, increasingly precise statistical techniques and modern technologies have transformed the way the finance industry works. And they will continue to do so.

If you’re interested in a career in data analytics, this offers a small taste of the huge array of areas you could work in, even within a single industry. To discover more about data analytics, try a free, 5-day data analytics short course. You can also read the following posts for more industry insights:

  • Where could a career in data take you?
  • What’s the difference between data science, data analytics, and machine learning?
  • What’s the difference between a data scientist and a data engineer?

Financial Analysis: Definition, Importance, Types, and Examples


Financial analysis is the process of evaluating businesses, projects, budgets, and other finance-related transactions to determine their performance and suitability. Typically, financial analysis is used to analyze whether an entity is stable, solvent, liquid, or profitable enough to warrant a monetary investment.

Key Takeaways

  • If conducted internally, financial analysis can help fund managers make future business decisions or review historical trends for past successes.
  • If conducted externally, financial analysis can help investors choose the best possible investment opportunities.
  • Fundamental analysis and technical analysis are the two main types of financial analysis.
  • Fundamental analysis uses ratios and financial statement data to determine the intrinsic value of a security.
  • Technical analysis assumes a security's value is already determined by its price, and it focuses instead on trends in value over time.


Understanding Financial Analysis

Financial analysis is used to evaluate economic trends, set financial policy, build long-term plans for business activity, and identify projects or companies for investment.

This is done through the synthesis of financial numbers and data. A financial analyst will thoroughly examine a company's financial statements—the income statement, balance sheet, and cash flow statement. Financial analysis can be conducted in both corporate finance and investment finance settings.

One of the most common ways to analyze financial data is to calculate ratios from the data in the financial statements to compare against those of other companies or against the company's own historical performance.

For example, return on assets (ROA) is a common ratio used to determine how efficient a company is at using its assets and as a measure of profitability. This ratio could be calculated for several companies in the same industry and compared to one another as part of a larger analysis.

There is no single best financial analytic ratio or calculation. Most often, analysts use a combination of data to arrive at their conclusions.

In corporate finance, the analysis is conducted internally by the accounting department and shared with management in order to improve business decision-making. This type of internal analysis may include ratios such as net present value (NPV) and internal rate of return (IRR) to find projects worth executing.

Many companies extend credit to their customers. As a result, the cash receipt from sales may be delayed for a period of time. For companies with large receivable balances, it is useful to track days sales outstanding (DSO), which helps the company identify the length of time it takes to turn a credit sale into cash. The average collection period is an important aspect of a company's overall cash conversion cycle.

A key area of corporate financial analysis involves extrapolating a company's past performance, such as net earnings or profit margin, into an estimate of the company's future performance. This type of historical trend analysis is beneficial to identify seasonal trends.

For example, retailers may see a drastic upswing in sales in the few months leading up to Christmas. This allows the business to forecast budgets and make decisions, such as necessary minimum inventory levels, based on past trends.

In investment finance, an analyst external to the company conducts an analysis for investment purposes. Analysts can either conduct a top-down or bottom-up investment approach.

A top-down approach first looks for macroeconomic opportunities, such as high-performing sectors, and then drills down to find the best companies within that sector. From this point, they further analyze the stocks of specific companies to choose potentially successful ones as investments by looking last at a particular company's fundamentals.

A bottom-up approach, on the other hand, looks at a specific company and conducts a similar ratio analysis to the ones used in corporate financial analysis, looking at past performance and expected future performance as investment indicators.

Bottom-up investing forces investors to consider microeconomic factors first and foremost. These factors include a company's overall financial health, analysis of financial statements, the products and services offered, supply and demand, and other individual indicators of corporate performance over time.

Financial analysis is only useful as a comparative tool. Calculating a single instance of data is usually worthless; comparing that data against prior periods, other general ledger accounts, or competitor financial information yields useful information.

There are two types of financial analysis as it relates to equity investments: fundamental analysis and technical analysis.

Fundamental Analysis

Fundamental analysis uses ratios gathered from data within the financial statements, such as a company's earnings per share (EPS), in order to determine the business's value.

Using ratio analysis in addition to a thorough review of economic and financial situations surrounding the company, the analyst is able to arrive at an intrinsic value for the security. The end goal is to arrive at a number that an investor can compare with a security's current price in order to see whether the security is undervalued or overvalued.

Technical Analysis

Technical analysis uses statistical trends gathered from trading activity, such as moving averages (MA).

Essentially, technical analysis assumes that a security’s price already reflects all publicly available information and instead focuses on the statistical analysis of price movements. Technical analysis attempts to predict market movements by looking for patterns and trends in stock prices and volumes rather than analyzing a security’s fundamental attributes.

When reviewing a company's financial statements, two common types of financial analysis are horizontal analysis and vertical analysis . Both use the same set of data, though each analytical approach is different.

Horizontal analysis entails selecting several years of comparable financial data. One year is selected as the baseline, often the oldest. Then, each account for each subsequent year is compared to this baseline, creating a percentage that easily identifies which accounts are growing (hopefully revenue) and which accounts are shrinking (hopefully expenses).

Vertical analysis entails choosing a specific line item benchmark, and then seeing how every other component on a financial statement compares to that benchmark.

Most often, net sales are used as the benchmark. A company would then compare the cost of goods sold, gross profit, operating profit, or net income as a percentage of this benchmark. Companies can then track how the percentage changes over time.
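Both techniques are a few lines in pandas. The tiny income statement below is made up (figures in $m):

```python
# Minimal sketch: horizontal and vertical analysis on an invented
# income statement.
import pandas as pd

income = pd.DataFrame(
    {"2022": [500, 300, 120, 80], "2023": [560, 330, 140, 90],
     "2024": [640, 360, 170, 110]},
    index=["Net sales", "Cost of goods sold", "Operating expenses", "Net income"],
)

# Horizontal analysis: each year as a % change from the 2022 baseline.
horizontal = income.div(income["2022"], axis=0) - 1

# Vertical analysis: each line item as a % of that year's net sales.
vertical = income.div(income.loc["Net sales"], axis=1)

print(horizontal.round(3))
print(vertical.round(3))
```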

Examples of Financial Analysis

In Q1 2024, Amazon.com reported a net income of $10.4 billion. This was a substantial increase from one year ago when the company reported a net income of $3.2 billion in Q1 2023.

Analysts can use the information above to perform corporate financial analysis. For example, consider Amazon's operating profit margins below, which can be calculated by dividing operating income by net sales.

  • 2024: $15,307 / $143,313 = 10.7%
  • 2023: $4,774 / $127,358 = 3.7%

From Q1 2023 to Q1 2024, the company experienced an increase in operating margin, allowing for financial analysis to reveal that the company earned more operating income for every dollar of sales.
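The same arithmetic, as a quick check in code (operating income and net sales in $ millions, from the figures above):

```python
# Operating margin = operating income / net sales, per quarter.
quarters = {"Q1 2023": (4_774, 127_358), "Q1 2024": (15_307, 143_313)}

for label, (operating_income, net_sales) in quarters.items():
    print(f"{label}: {operating_income / net_sales:.1%} operating margin")
# Q1 2023: 3.7% operating margin
# Q1 2024: 10.7% operating margin
```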

Why Is Financial Analysis Useful?

Financial analysis aims to determine whether an entity is stable, liquid, solvent, or profitable enough to warrant a monetary investment. It is used to evaluate economic trends, set financial policies, build long-term plans for business activity, and identify projects or companies for investment.

How Is Financial Analysis Done?

Financial analysis can be conducted in both corporate finance and investment finance settings. A financial analyst will thoroughly examine a company's financial statements—the income statement, balance sheet, and cash flow statement.

One of the most common ways to analyze financial data is to calculate ratios from the data in the financial statements to compare against those of other companies or against the company's own historical performance. A key area of corporate financial analysis involves extrapolating a company's past performance, such as net earnings or profit margin, into an estimate of the company's future performance.

What Techniques Are Used in Conducting Financial Analysis?

Analysts can use vertical analysis to compare each component of a financial statement as a percentage of a baseline (such as each component as a percentage of total sales). Alternatively, analysts can perform horizontal analysis by comparing one baseline year's financial results to other years.

Many financial analysis techniques involve analyzing growth rates, including regression analysis, year-over-year growth, top-down analysis (such as market share percentage), or bottom-up analysis (such as revenue driver analysis).

Lastly, financial analysis often entails the use of financial metrics and ratios. These techniques include quotients relating to the liquidity, solvency, profitability, or efficiency (turnover of resources) of a company.

What Is Fundamental Analysis?

Fundamental analysis uses ratios gathered from data within the financial statements, such as a company's earnings per share (EPS), in order to determine the business's value. Using ratio analysis in addition to a thorough review of economic and financial situations surrounding the company, the analyst is able to arrive at an intrinsic value for the security. The end goal is to arrive at a number that an investor can compare with a security's current price in order to see whether the security is undervalued or overvalued.

What Is Technical Analysis?

Technical analysis uses statistical trends gathered from market activity, such as moving averages (MA). Essentially, technical analysis assumes that a security’s price already reflects all publicly available information and instead focuses on the statistical analysis of price movements. Technical analysis attempts to understand the market sentiment behind price trends by looking for patterns and trends rather than analyzing a security’s fundamental attributes.
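
The following sketch shows one common technical-analysis building block, a simple moving-average crossover, on a synthetic price series; it is illustrative only, not a trading strategy.

```python
# A minimal technical-analysis sketch: a simple moving-average (SMA)
# crossover signal on a synthetic price series.
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)
prices = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, 250)))  # fake prices

fast = prices.rolling(20).mean()   # 20-day SMA
slow = prices.rolling(50).mean()   # 50-day SMA

# +1 while the fast average is above the slow one, -1 otherwise
signal = np.where(fast > slow, 1, -1)
print("Days 'bullish' by this crude rule:", (signal == 1).sum())
```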

Financial analysis is a cornerstone of making smarter, more strategic decisions based on the underlying financial data of a company.

Whether corporate, investment, or technical analysis, analysts use data to explore trends, understand growth, seek areas of risk, and support decision-making. Financial analysis may include investigating financial statement changes, calculating financial ratios, or exploring operating variances.

U.S. Securities and Exchange Commission. "Amazon.com Form 10-Q for the Quarter Ended March 31, 2024," Page 4.


Exploring Ratio Analysis Through Real-Life Case Studies

Dec 4, 2023 5:54 AM - Parth Sanghvi


Introduction:

Ratio analysis is a powerful tool in financial analysis, providing insights into a company's performance. This guide will explore the application of ratio analysis through diverse case studies, showcasing its significance and practical implications in decision-making.

Understanding Ratio Analysis:

Ratio analysis involves the examination of various financial ratios to evaluate a company's financial health, performance, and operational efficiency. Key ratios include liquidity, profitability, solvency, and efficiency ratios.
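
For concreteness, here is a minimal sketch computing one ratio from each of those families; the balance-sheet and income-statement figures are invented.

```python
# One illustrative ratio per family: liquidity, profitability,
# solvency, and efficiency. All figures are made up.

balance = {"current_assets": 500.0, "inventory": 150.0,
           "current_liabilities": 250.0, "total_debt": 400.0,
           "shareholders_equity": 800.0}
income = {"net_income": 120.0, "net_sales": 1_000.0,
          "cost_of_goods_sold": 600.0}

current_ratio = balance["current_assets"] / balance["current_liabilities"]
quick_ratio = ((balance["current_assets"] - balance["inventory"])
               / balance["current_liabilities"])                   # liquidity
net_margin = income["net_income"] / income["net_sales"]            # profitability
debt_to_equity = balance["total_debt"] / balance["shareholders_equity"]  # solvency
inventory_turnover = income["cost_of_goods_sold"] / balance["inventory"]  # efficiency

print(f"current {current_ratio:.2f}, quick {quick_ratio:.2f}, "
      f"net margin {net_margin:.1%}, D/E {debt_to_equity:.2f}, "
      f"inv. turnover {inventory_turnover:.1f}x")
```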

Importance of Case Studies in Ratio Analysis:

Real-life case studies offer practical demonstrations of how ratio analysis influences decision-making and provides actionable insights for investors, analysts, and businesses.

Case Study Examples:

Liquidity Ratio Impact on Small Business:

  • Analyzing the current ratio and quick ratio of a small business to assess its ability to meet short-term obligations during a cash crunch.

Profitability Ratios in Tech Companies:

  • Comparing net profit margin and ROE among tech giants to identify profitability leaders in the industry.

Solvency Ratio Impact in the Retail Sector:

  • Analyzing debt-to-equity ratios in the retail sector during economic downturns to evaluate resilience and risk management strategies.

Efficiency Ratios and Manufacturing Operations:

  • Assessing inventory turnover ratios and receivables turnover ratios in manufacturing firms to streamline operational efficiency.

Valuation Ratios in Investment Decisions:

  • Using P/E ratios and P/B ratios to make informed investment decisions in different sectors based on market sentiment.

Lessons Learned from Case Studies:

Holistic Evaluation: How combining multiple ratios provides a comprehensive view of a company's performance.

Industry Benchmarking: The significance of benchmarking ratios against industry averages for accurate comparative analysis.

Impact on Decision-making: How ratio analysis influences investment, strategic, and operational decisions.

Leveraging Insights from Ratio Analysis:

Continuous Monitoring: Regularly reviewing ratios to detect trends and identify areas needing improvement.

Predictive Analysis: Using historical data from ratios to forecast future performance and trends.

Conclusion:

Ratio analysis case studies provide actionable insights and practical applications for businesses and investors. Learning from these real-life examples empowers stakeholders to make informed decisions based on a thorough understanding of financial ratios.



Quantitative Data Analysis: Methods, Applications, and Case Studies

August 29th, 2024

The ability to properly analyze and understand numbers has become especially valuable in today's data-driven environment.

Analyzing numerical data systematically involves thoughtfully collecting, organizing, and studying data to discover patterns, trends, and connections that can guide important choices.  

Key Highlights

  • Quantitative data analysis involves gathering, organizing, and examining numerical data to gain insights and make data-informed choices.
  • It draws on methods like descriptive statistics, predictive modeling, machine learning, and other statistical techniques to make sense of the data.
  • For businesses, researchers, and organizations, analyzing numbers reveals patterns, relationships, and changes over time within their data.
  • These analyses support data-driven decision-making, outcome forecasting, intelligent risk assessment, and the refinement of strategies and workflows.

What is Quantitative Data Analysis?

Quantitative data analysis applies statistical methods and computational processes to study and make sense of data, revealing patterns, connections, and changes over time that give insight to guide decisions.

At the core, quantitative analysis builds on math and stats fundamentals to turn raw figures into meaningful knowledge.

The process usually starts with gathering related numbers and organizing them neatly. Then analysts use different statistical techniques like descriptive stats, predictive modeling, and more to pull out valuable lessons.

Descriptive stats provide a summary of the key details, like averages and how spread out the numbers are. This helps analysts understand the basics and find any weird outliers.

Inferential stats allow analysts to generalize from a sample to a broader population. Techniques like hypothesis testing, regression analysis, and correlation analysis help identify significant relationships.

Machine learning and predictive modeling have also enhanced working with numbers. These sophisticated methods let analysts create models that can forecast outcomes, recognize patterns across huge datasets, and uncover hidden insights beyond basic stats alone.

Leveraging data-based evidence supports more informed management of resources.

Data Collection and Preparation

The first step in any quantitative data analysis is collecting the relevant data. This involves determining what data is needed to answer the research question or business objective.

Data can come from a variety of sources such as surveys, experiments, observational studies, transactions, sensors, and more. 

Once the data is obtained, it typically needs to go through a data preprocessing or data cleaning phase.

Real-world data is often messy, containing missing values, errors, inconsistencies, and outliers that can negatively impact the analysis if not handled properly. Common data cleaning tasks include:

  • Handling missing data through imputation or case deletion
  • Identifying and treating outliers 
  • Transforming variables (e.g. log transformations)
  • Encoding categorical variables
  • Removing duplicate observations

The goal of data cleaning is to ensure that quantitative data analysis techniques can be applied accurately to high-quality data. Proper data collection and preparation lays the foundation for reliable results.
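
Here is a brief pandas sketch of those cleaning steps on a tiny made-up dataset; the column names and the IQR-fence outlier rule are illustrative choices, not prescriptions.

```python
# Common cleaning steps on a toy dataset: deduplication, imputation,
# outlier removal, a variable transform, and categorical encoding.
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "revenue": [120.0, np.nan, 95.0, 5000.0, 110.0, 110.0],
    "region": ["north", "south", "south", "north", "east", "east"],
})

df = df.drop_duplicates()                                      # remove duplicate rows
df["revenue"] = df["revenue"].fillna(df["revenue"].median())   # impute missing value

# Treat outliers with a simple interquartile-range (IQR) fence
q1, q3 = df["revenue"].quantile([0.25, 0.75])
fence = 1.5 * (q3 - q1)
df = df[df["revenue"].between(q1 - fence, q3 + fence)]         # drops the 5000 outlier

df["log_revenue"] = np.log(df["revenue"])                      # transform a variable
df = pd.get_dummies(df, columns=["region"])                    # encode categoricals
print(df)
```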

In addition to cleaning, the data may need to be structured or formatted in a way that statistical software and data analysis tools can read it properly.

For large datasets, data management principles like establishing data pipelines become important.

Descriptive Statistics of Quantitative Data Analysis

Descriptive statistics is a crucial aspect of quantitative data analysis that involves summarizing and describing the main characteristics of a dataset.

This branch of statistics aims to provide a clear and concise representation of the data, making it easier to understand and interpret.

Descriptive statistics are typically the first step in analyzing data, as they provide a foundation for further statistical analyses and help identify patterns, trends, and potential outliers.

The most common descriptive statistics measures include:

  • Mean : The arithmetic average of the data points.
  • Median : The middle value in a sorted dataset.
  • Mode : The value that occurs most frequently in the dataset.
  • Range : The difference between the highest and lowest values in the dataset.
  • Variance : The average of the squared deviations from the mean.
  • Standard Deviation : The square root of the variance, providing a measure of the spread of data around the mean.
  • Histograms : Visual representations of the distribution of data using bars.
  • Box Plots : Graphical displays that depict the distribution’s median, quartiles, and outliers.
  • Scatter Plots : Displays the relationship between two quantitative variables.
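
A short sketch computing several of the measures just listed, using Python's standard statistics module on a made-up sample:

```python
# Descriptive statistics for a toy sample of daily sales figures.
import statistics as st

sales = [12, 15, 15, 18, 20, 22, 22, 22, 95]  # toy data with one outlier

print("mean:  ", round(st.mean(sales), 1))      # arithmetic average
print("median:", st.median(sales))              # middle value
print("mode:  ", st.mode(sales))                # most frequent value
print("range: ", max(sales) - min(sales))       # spread between extremes
print("var:   ", round(st.variance(sales), 1))  # sample variance
print("stdev: ", round(st.stdev(sales), 1))     # sample standard deviation
# The gap between mean (~26.8) and median (20) hints at the outlier (95).
```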

Descriptive statistics play a vital role in data exploration and understanding the initial characteristics of a dataset.

They provide a summary of the data, allowing researchers and analysts to identify patterns, detect potential outliers, and make informed decisions about further analyses.

However, it’s important to note that descriptive statistics alone do not provide insights into the underlying relationships or causal mechanisms within the data.

To draw meaningful conclusions and make inferences about the population, inferential statistics and advanced analytical techniques are required.

Inferential Statistics

While descriptive statistics provide a summary of data, inferential statistics allow you to make inferences and draw conclusions from that data.

Inferential statistics involve taking findings from a sample and generalizing them to a larger population. This is crucial when it is impractical or impossible to study an entire population.

The core of inferential statistics revolves around hypothesis testing . A hypothesis is a statement about a population parameter that needs to be evaluated based on sample data.

The process involves formulating a null and alternative hypothesis, calculating an appropriate test statistic, determining the p-value, and making a decision whether to reject or fail to reject the null hypothesis.

Some common inferential techniques include:

T-tests – Used to determine if the mean of a population differs significantly from a hypothesized value or if the means of two populations differ significantly.

ANOVA ( Analysis of Variance ) – Used to determine if the means of three or more groups are different.  

Regression analysis – Used to model the relationship between a dependent variable and one or more independent variables. This allows you to understand drivers and make predictions.

Correlation analysis – Used to measure the strength and direction of the relationship between two variables.
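
As a minimal illustration, the SciPy sketch below runs a two-sample t-test and a Pearson correlation on synthetic data; the group means and the built-in relationship are invented.

```python
# A minimal inferential sketch: two-sample t-test plus Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(100, 15, 40)   # e.g. a control group (assumed parameters)
group_b = rng.normal(110, 15, 40)   # e.g. a treatment group with a higher mean

t_stat, p_val = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")  # small p suggests a mean difference

x = rng.normal(0, 1, 60)
y = 0.7 * x + rng.normal(0, 0.5, 60)         # built-in positive relationship
r, p_r = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p_r:.4f}")
```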

Inferential statistics are critical for quantitative research, allowing you to test hypotheses, quantify relationships, and make data-driven decisions with confidence in the findings.

However, the validity depends on meeting the assumptions of the statistical tests and having a properly designed study with adequate sample sizes.

The interpretation of inferential statistics requires care. P-values indicate the probability of obtaining the observed data assuming the null hypothesis is true – they do not confirm or deny the hypothesis directly. Effect sizes are also crucial for assessing the practical significance beyond just statistical significance.

Predictive Modeling and Machine Learning

Quantitative data analysis goes beyond just describing and making inferences about data – it can also be used to build predictive models that forecast future events or behaviors.

Predictive modeling uses statistical techniques to analyze current and historical data to predict unknown future values. 

Some of the key techniques used in predictive modeling include regression analysis , decision trees , neural networks, and other machine learning algorithms.

Regression analysis is used to understand the relationship between a dependent variable and one or more independent variables.

It allows you to model that relationship and make predictions. More advanced techniques like decision trees and neural networks can capture highly complex, non-linear relationships in data.

Machine learning has become an integral part of quantitative data analysis and predictive modeling. Machine learning algorithms can automatically learn and improve from experience without being explicitly programmed.

They can identify hidden insights and patterns in large, complex datasets that would be extremely difficult or impossible for humans to find manually.

Some popular machine learning techniques used for predictive modeling include:

  • Supervised learning (decision trees, random forests, support vector machines)
  • Unsupervised learning ( k-means clustering , hierarchical clustering) 
  • Neural networks and deep learning
  • Ensemble methods (boosting, bagging)

Predictive models have a wide range of applications across industries, from forecasting product demand and sales to identifying risk of customer churn to detecting fraud.

With the rise of big data , machine learning is becoming increasingly important for building accurate predictive models from large, varied data sources.
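
A compact supervised-learning sketch with scikit-learn follows; the synthetic dataset stands in for something like churn records, and all parameters are illustrative.

```python
# A small supervised-learning sketch: a random forest predicting a
# binary outcome from synthetic features (data generated, not real).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=8,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print("hold-out accuracy:", round(accuracy_score(y_test, preds), 3))
```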

Quantitative Data Analysis Tools and Software

To effectively perform quantitative data analysis, having the right tools and software is essential. There are numerous options available, ranging from open-source solutions to commercial platforms.

The choice depends on factors such as the size and complexity of the data, the specific analysis techniques required, and the budget.

Statistical Software Packages

  • R : A powerful open-source programming language and software environment for statistical computing and graphics. It offers a vast collection of packages for various data analysis tasks.
  • Python : Another popular open-source programming language with excellent data analysis capabilities through libraries like NumPy, Pandas, Matplotlib, and scikit-learn.
  • SPSS : A commercial software package widely used in academic and research settings for statistical analysis, data management, and data documentation.
  • SAS : A comprehensive software suite for advanced analytics, business intelligence, data management, and predictive analytics.
  • STATA : A general-purpose statistical software package commonly used in research, especially in the fields of economics, sociology, and political science.

Spreadsheet Applications

  • Microsoft Excel : A widely used spreadsheet application that offers built-in statistical functions and data visualization tools, making it suitable for basic data analysis tasks.
  • Google Sheets : A free, web-based alternative to Excel, offering similar functionality and collaboration features.

Data Visualization Tools

  • Tableau : A powerful data visualization tool that allows users to create interactive dashboards and reports, enabling effective communication of quantitative data.
  • Power BI : Microsoft’s business intelligence platform that combines data visualization capabilities with data preparation and data modeling features.
  • Plotly : A high-level, declarative charting library that can be used with Python, R, and other programming languages to create interactive, publication-quality graphs.

Business Intelligence (BI) and Analytics Platforms

  • Microsoft Power BI : A cloud-based business analytics service that provides data visualization, data preparation, and data discovery capabilities.
  • Tableau Server/Online : A platform that enables sharing and collaboration around data visualizations and dashboards created with Tableau Desktop.
  • Qlik Sense : A data analytics platform that combines data integration, data visualization, and guided analytics capabilities.

Cloud-based Data Analysis Platforms

  • Amazon Web Services (AWS) Analytics Services : A suite of cloud-based services for data analysis, including Amazon Athena, Amazon EMR, and Amazon Redshift.
  • Google Cloud Platform (GCP) Data Analytics : GCP offers various data analytics tools and services, such as BigQuery, Dataflow, and Dataprep.
  • Microsoft Azure Analytics Services : Azure provides a range of analytics services, including Azure Synapse Analytics, Azure Data Explorer, and Azure Machine Learning.

Applications of Quantitative Data Analysis

Quantitative data analysis techniques find widespread applications across numerous domains and industries. Here are some notable examples:

Business Analytics

Businesses rely heavily on quantitative methods to gain insights from customer data, sales figures, market trends, and operational metrics.

Techniques like regression analysis help model customer behavior, while clustering algorithms enable customer segmentation. Forecasting models allow businesses to predict future demand, inventory needs, and revenue projections.
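
For example, a customer-segmentation step might look like the k-means sketch below; the two behavioral features and their distributions are invented.

```python
# Customer segmentation with k-means on two invented behavioral
# features (monthly spend and visit frequency).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
spend = np.concatenate([rng.normal(50, 10, 100), rng.normal(200, 30, 100)])
visits = np.concatenate([rng.normal(2, 0.5, 100), rng.normal(8, 1.5, 100)])
X = StandardScaler().fit_transform(np.column_stack([spend, visits]))

labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
print("customers per segment:", np.bincount(labels))
```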

Healthcare and Biomedical Research with Quantitative Data Analysis

Analysis of clinical trial data, disease prevalence statistics, and patient outcomes employs quantitative methods extensively.

Hypothesis testing determines the efficacy of new drugs or treatments. Survival analysis models patient longevity. Data mining techniques identify risk factors and detect anomalies in healthcare data.

Marketing and Consumer Research

Marketing teams use quantitative data from surveys, A/B tests, and online behavior tracking to optimize campaigns. Regression models predict customer churn or likelihood to purchase.

Sentiment analysis derives insights from social media data and product reviews. Conjoint analysis determines which product features impact consumer preferences.

Finance and Risk Management with Quantitative Data Analysis

Quantitative finance relies on statistical models for portfolio optimization, derivative pricing, risk quantification, and trading strategy formulation. Value at Risk (VaR) models assess potential losses. Monte Carlo simulations evaluate the risk of complex financial instruments.
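
A toy Monte Carlo VaR calculation illustrates the idea; the portfolio value and return parameters are assumed rather than estimated from market data.

```python
# Toy Monte Carlo Value at Risk (VaR): simulate one-day returns and
# read off the loss threshold at the 95% confidence level.
import numpy as np

rng = np.random.default_rng(7)
portfolio_value = 1_000_000            # USD (assumed)
mu, sigma = 0.0004, 0.012              # daily mean and volatility (assumed)

simulated_returns = rng.normal(mu, sigma, 100_000)
pnl = portfolio_value * simulated_returns

var_95 = -np.percentile(pnl, 5)        # 5th percentile of P&L, sign-flipped
print(f"1-day 95% VaR: ${var_95:,.0f}")
# Read: on 95% of simulated days the loss does not exceed this amount.
```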

Social and Opinion Research

From political polls to consumer surveys, quantitative data analysis techniques like weighting, sampling, and survey data adjustment are critical. Researchers employ methods like factor analysis, cluster analysis, and structural equation modeling .

Case Studies

Case Study 1: Netflix's Data-Driven Recommendations

Netflix extensively uses quantitative data analysis, particularly machine learning, to drive its recommendation engine.

By mining user behavior data and combining it with metadata about movies and shows, they build predictive models to accurately forecast what a user would enjoy watching next.

Case Study 2: Moneyball – Analytics in Sports

The adoption of sabermetrics and analytics by baseball teams like the Oakland Athletics, as depicted in the movie Moneyball, revolutionized player scouting and strategy.

By quantifying player performance through new statistical metrics, teams could identify undervalued talent and gain a competitive edge.

Quantitative data analysis is a powerful toolset that allows organizations to derive valuable insights from their data to make informed decisions.

By applying the various techniques and methods discussed, such as descriptive statistics, inferential statistics , predictive modeling , and machine learning, businesses can gain a competitive edge by uncovering patterns, trends, and relationships hidden within their data.

However, it’s important to note that quantitative data analysis is not a one-time exercise. As businesses continue to generate and collect more data, the analysis process should be an ongoing, iterative cycle.

If you're looking to further enhance your quantitative data analysis capabilities, there are several potential next steps to consider:

  • Continuous learning and skill development : The field of data analysis is constantly evolving, with new statistical methods, modeling techniques, and software tools emerging regularly. Investing in ongoing training and education can help analysts stay up-to-date with the latest advancements and best practices.
  • Investing in specialized tools and infrastructure : As data volumes continue to grow, organizations may need to invest in more powerful data analysis tools, such as big data platforms, cloud-based solutions, or specialized software packages tailored to their specific industry or use case.
  • Collaboration and knowledge sharing : Fostering a culture of collaboration and knowledge sharing within the organization can help analysts learn from each other’s experiences, share best practices, and collectively improve the organization’s analytical capabilities.
  • Integrating qualitative data : While this article has focused primarily on quantitative data analysis, incorporating qualitative data sources, such as customer feedback, social media data, or expert opinions, can provide additional context and enrich the analysis process.
  • Ethical considerations and data governance : As data analysis becomes more prevalent, it’s crucial to address ethical concerns related to data privacy, bias, and responsible use of analytics.

Implementing robust data governance policies and adhering to ethical guidelines can help organizations maintain trust and accountability.


Methodologic and Data-Analysis Triangulation in Case Studies: A Scoping Review

Margarithe Charlotte Schlunegger

1 Department of Health Professions, Applied Research & Development in Nursing, Bern University of Applied Sciences, Bern, Switzerland

2 Faculty of Health, School of Nursing Science, Witten/Herdecke University, Witten, Germany

Maya Zumstein-Shaha

Rebecca Palm

3 Department of Health Care Research, Carl von Ossietzky University Oldenburg, Oldenburg, Germany

Associated Data

Supplemental material, sj-docx-1-wjn-10.1177_01939459241263011 for Methodologic and Data-Analysis Triangulation in Case Studies: A Scoping Review by Margarithe Charlotte Schlunegger, Maya Zumstein-Shaha and Rebecca Palm in Western Journal of Nursing Research

Objective:

We sought to explore the processes of methodologic and data-analysis triangulation in case studies using the example of research on nurse practitioners in primary health care.

Design and methods:

We conducted a scoping review within Arksey and O’Malley’s methodological framework, considering studies that defined a case study design and used 2 or more data sources, published in English or German before August 2023.

Data sources:

The databases searched were MEDLINE and CINAHL, supplemented with hand searching of relevant nursing journals. We also examined the reference list of all the included studies.

Results:

In total, 63 reports were assessed for eligibility. Ultimately, we included 8 articles. Five studies described within-method triangulation, whereas 3 provided information on between/across-method triangulation. No study reported within-method triangulation of 2 or more quantitative data-collection procedures. The data-collection procedures were interviews, observation, documentation/documents, service records, and questionnaires/assessments. The data-analysis triangulation involved various qualitative and quantitative methods of analysis. Details about comparing or contrasting results from different qualitative and mixed-methods data were lacking.

Conclusions:

Various processes for methodologic and data-analysis triangulation are described in this scoping review but lack detail, thus hampering standardization in case study research, potentially affecting research traceability. Triangulation is complicated by terminological confusion. To advance case study research in nursing, authors should reflect critically on the processes of triangulation and employ existing tools, like a protocol or mixed-methods matrix, for transparent reporting. The only existing reporting guideline should be complemented with directions on methodologic and data-analysis triangulation.

Case study research is defined as “an empirical method that investigates a contemporary phenomenon (the ‘case’) in depth and within its real-world context, especially when the boundaries between phenomenon and context may not be clearly evident. A case study relies on multiple sources of evidence, with data needing to converge in a triangulating fashion.” 1 (p15) This design is described as a stand-alone research approach equivalent to grounded theory and can entail single and multiple cases. 1 , 2 However, case study research should not be confused with single clinical case reports. “Case reports are familiar ways of sharing events of intervening with single patients with previously unreported features.” 3 (p107) As a methodology, case study research encompasses substantially more complexity than a typical clinical case report. 1 , 3

A particular characteristic of case study research is the use of various data sources, such as quantitative data originating from questionnaires as well as qualitative data emerging from interviews, observations, or documents. Therefore, a case study always draws on multiple sources of evidence, and the data must converge in a triangulating manner. 1 When using multiple data sources, a case or cases can be examined more convincingly and accurately, compensating for the weaknesses of the respective data sources. 1 Another characteristic is the interaction of various perspectives. This involves comparing or contrasting perspectives of people with different points of view, eg, patients, staff, or leaders. 4 Through triangulation, case studies contribute to the completeness of the research on complex topics, such as role implementation in clinical practice. 1 , 5 Triangulation involves a combination of researchers from various disciplines, of theories, of methods, and/or of data sources. By creating connections between these sources (ie, investigator, theories, methods, data sources, and/or data analysis), a new understanding of the phenomenon under study can be obtained. 6 , 7

This scoping review focuses on methodologic and data-analysis triangulation because concrete procedures are missing, eg, in reporting guidelines. Methodologic triangulation has been called methods, mixed methods, or multimethods. 6 It can encompass within-method triangulation and between/across-method triangulation. 7 “Researchers using within-method triangulation use at least 2 data-collection procedures from the same design approach.” 6 (p254) Within-method triangulation is either qualitative or quantitative but not both. Therefore, within-method triangulation can also be considered data source triangulation. 8 In contrast, “researchers using between/across-method triangulation employ both qualitative and quantitative data-collection methods in the same study.” 6 (p254) Hence, methodologic approaches are combined as well as various data sources. For this scoping review, the term “methodologic triangulation” is maintained to denote between/across-method triangulation. “Data-analysis triangulation is the combination of 2 or more methods of analyzing data.” 6 (p254)

Although much has been published on case studies, there is little consensus on the quality of the various data sources, the most appropriate methods, or the procedures for conducting methodologic and data-analysis triangulation. 5 According to the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) clearinghouse for reporting guidelines, one standard exists for organizational case studies. 9 Organizational case studies provide insights into organizational change in health care services. 9 Rodgers et al 9 pointed out that, although high-quality studies are being funded and published, they are sometimes poorly articulated and methodologically inadequate. In the reporting checklist by Rodgers et al, 9 a description of the data collection is included, but reporting directions on methodologic and data-analysis triangulation are missing. Therefore, the purpose of this study was to examine the process of methodologic and data-analysis triangulation in case studies. Accordingly, we conducted a scoping review to elicit descriptions of and directions for triangulation methods and analysis, drawing on case studies of nurse practitioners (NPs) in primary health care as an example. Case studies are recommended to evaluate the implementation of new roles in (primary) health care, such as that of NPs. 1 , 5 Case studies on new role implementation can generate a unique and in-depth understanding of specific roles (individual), teams (smaller groups), family practices or similar institutions (organization), and social and political processes in health care systems. 1 , 10 The integration of NPs into health care systems is at different stages of progress around the world. 11 Therefore, studies are needed to evaluate this process.

The methodological framework by Arksey and O’Malley 12 guided this scoping review. We examined the current scientific literature on the use of methodologic and data-analysis triangulation in case studies on NPs in primary health care. The review process included the following stages: (1) establishing the research question; (2) identifying relevant studies; (3) selecting the studies for inclusion; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consulting experts in the field. 12 Stage 6 was not performed due to a lack of financial resources. The reporting of the review followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Review) guideline by Tricco et al 13 (guidelines for reporting systematic reviews and meta-analyses [ Supplementary Table A ]). Scoping reviews are not eligible for registration in PROSPERO.

Stage 1: Establishing the Research Question

The aim of this scoping review was to examine the process of triangulating methods and analysis in case studies on NPs in primary health care to improve the reporting. We sought to answer the following question: How have methodologic and data-analysis triangulation been conducted in case studies on NPs in primary health care? To answer the research question, we examined the following elements of the selected studies: the research question, the study design, the case definition, the selected data sources, and the methodologic and data-analysis triangulation.

Stage 2: Identifying Relevant Studies

A systematic database search was performed in the MEDLINE (via PubMed) and CINAHL (via EBSCO) databases between July and September 2020 to identify relevant articles. The following terms were used as keyword search strategies: (“Advanced Practice Nursing” OR “nurse practitioners”) AND (“primary health care” OR “Primary Care Nursing”) AND (“case study” OR “case studies”). Searches were limited to English- and German-language articles. Hand searches were conducted in the journals Nursing Inquiry , BMJ Open , and BioMed Central ( BMC ). We also screened the reference lists of the studies included. The database search was updated in August 2023. The complete search strategy for all the databases is presented in Supplementary Table B .

Stage 3: Selecting the Studies

Inclusion and exclusion criteria.

We used the inclusion and exclusion criteria reported in Table 1 . We included studies of NPs who had at least a master’s degree in nursing according to the definition of the International Council of Nurses. 14 This scoping review considered studies that were conducted in primary health care practices in rural, urban, and suburban regions. We excluded reviews and study protocols in which no data collection had occurred. Articles were included without limitations on the time period or country of origin.

Inclusion and Exclusion Criteria.

Population
  • Inclusion: NPs with a master's degree in nursing or higher
  • Exclusion: nurses with a bachelor's degree in nursing or lower; pre-registration nursing students; no definition of a master's degree in nursing described in the publication

Interest
  • Inclusion: description/definition of a case study design; two or more data sources
  • Exclusion: reviews; study protocols; summaries/comments/discussions

Context
  • Inclusion: primary health care; family practices and home visits (including adult practices, internal medicine practices, community health centers)
  • Exclusion: nursing homes, hospital, hospice

Screening process

After the search, we collated and uploaded all the identified records into EndNote v.X8 (Clarivate Analytics, Philadelphia, Pennsylvania) and removed any duplicates. Two independent reviewers (MCS and SA) screened the titles and abstracts for assessment in line with the inclusion criteria. They retrieved and assessed the full texts of the selected studies while applying the inclusion criteria. Any disagreements about the eligibility of studies were resolved by discussion or, if no consensus could be reached, by involving experienced researchers (MZ-S and RP).

Stages 4 and 5: Charting the Data and Collating, Summarizing, and Reporting the Results

The first reviewer (MCS) extracted data from the selected publications. For this purpose, an extraction tool developed by the authors was used. This tool comprised the following criteria: author(s), year of publication, country, research question, design, case definition, data sources, and methodologic and data-analysis triangulation. First, we extracted and summarized information about the case study design. Second, we narratively summarized the way in which the data and methodological triangulation were described. Finally, we summarized the information on within-case or cross-case analysis. This process was performed using Microsoft Excel. One reviewer (MCS) extracted data, whereas another reviewer (SA) cross-checked the data extraction, making suggestions for additions or edits. Any disagreements between the reviewers were resolved through discussion.

A total of 149 records were identified in 2 databases. We removed 20 duplicates and screened 129 reports by title and abstract. A total of 46 reports were assessed for eligibility. Through hand searches, we identified 117 additional records. Of these, we excluded 98 reports after title and abstract screening. A total of 17 reports were assessed for eligibility. From the 2 databases and the hand search, 63 reports were assessed for eligibility. Ultimately, we included 8 articles for data extraction. No further articles were included after the reference list screening of the included studies. A PRISMA flow diagram of the study selection and inclusion process is presented in Figure 1. As shown in Tables 2 and 3, the articles included in this scoping review were published between 2010 and 2022 in Canada (n = 3), the United States (n = 2), Australia (n = 2), and Scotland (n = 1).

Figure 1. PRISMA flow diagram.

Characteristics of Articles Included (Table 2).

  • Contandriopoulos et al (Canada): no information on the research question; six qualitative case studies (methodological guidance: Robert K. Yin); case defined as a team of health professionals (small group).
  • Flinter (the United States): several how or why research questions; multiple-case studies design (Yin); case defined as nurse practitioners (individuals).
  • Hogan et al (the United States): what and how research question; multiple-case studies design (Robert E. Stake); case defined as primary care practices (organization).
  • Hungerford et al (Australia): no information on the research question; case study design (Yin); case defined as a community-based NP model of practice (organization).
  • O'Rourke (Canada): several how or why research questions; qualitative single-case study (Yin, Stake, Sharan Merriam); case defined as an NP-led practice (organization).
  • Roots and MacDonald (Canada): no information on the research question; single-case study design (Yin, Merriam); case defined as primary care practices (organization).
  • Schadewaldt et al (Australia): what research question; multiple-case studies design (Yin, Stake); no information on case definition.
  • Strachan et al (Scotland): what and why research questions; multiple-case studies design; case defined as a health board (organization).

Overview of Within-Method, Between/Across-Method, and Data-Analysis Triangulation (Table 3).

  • Within-method triangulation (at least 2 data-collection procedures from the same design approach): interviews (5 studies), observations (2 studies), public documents (3 studies), and electronic health records (1 study).
  • Between/across-method triangulation (qualitative and quantitative data-collection procedures in the same study): qualitative procedures comprised interviews (3 studies), observations (2 studies), public documents (2 studies), and electronic health records (1 study); quantitative procedures comprised a self-assessment (1 study), service records (1 study), and questionnaires (1 study).
  • Data-analysis triangulation (combination of 2 or more methods of analyzing data): in the mixed-methods studies, qualitative analysis was deductive (3 studies), inductive (2 studies), or thematic (2 studies), combined with quantitative descriptive analysis (3 studies); in the qualitative studies, analysis was deductive (4 studies), inductive (2 studies), thematic (1 study), or content analysis (1 study).

Research Question, Case Definition, and Case Study Design

The following sections describe the research question, case definition, and case study design. Case studies are most appropriate when asking “how” or “why” questions. 1 According to Yin, 1 how and why questions are explanatory and lead to the use of case studies, histories, and experiments as the preferred research methods. In 1 study from Canada, eg, the following research question was presented: “How and why did stakeholders participate in the system change process that led to the introduction of the first nurse practitioner-led Clinic in Ontario?” (p7) 19 Once the research question has been formulated, the case should be defined and, subsequently, the case study design chosen. 1 In typical case studies with mixed methods, the 2 types of data are gathered concurrently in a convergent design and the results merged to examine a case and/or compare multiple cases. 10

Research question

“How” or “why” questions were found in 4 studies. 16 , 17 , 19 , 22 Two studies additionally asked “what” questions. Three studies described an exploratory approach, and 1 study presented an explanatory approach. Of these 4 studies, 3 studies chose a qualitative approach 17 , 19 , 22 and 1 opted for mixed methods with a convergent design. 16

In the remaining studies, either the research questions were not clearly stated or no “how” or “why” questions were formulated. For example, “what” questions were found in 1 study. 21 No information was provided on exploratory, descriptive, and explanatory approaches. Schadewaldt et al 21 chose mixed methods with a convergent design.

Case definition and case study design

A total of 5 studies defined the case as an organizational unit. 17 , 18 - 20 , 22 Of the 8 articles, 4 reported multiple-case studies. 16 , 17 , 22 , 23 Another 2 publications involved single-case studies. 19 , 20 Moreover, 2 publications did not state the case study design explicitly.

Within-Method Triangulation

This section describes within-method triangulation, which involves employing at least 2 data-collection procedures within the same design approach. 6 , 7 This can also be called data source triangulation. 8 Next, we present the single data-collection procedures in detail. In 5 studies, information on within-method triangulation was found. 15 , 17 - 19 , 22 No studies describing a quantitative approach with the triangulation of 2 or more quantitative data-collection procedures were identified for this scoping review.

Qualitative approach

Five studies used qualitative data-collection procedures. Two studies combined face-to-face interviews and documents. 15 , 19 One study mixed in-depth interviews with observations, 18 and 1 study combined face-to-face interviews and documentation. 22 One study contained face-to-face interviews, observations, and documentation. 17 The combination of different qualitative data-collection procedures was used to present the case context in an authentic and complex way, to elicit the perspectives of the participants, and to obtain a holistic description and explanation of the cases under study.

All 5 studies used qualitative interviews as the primary data-collection procedure. 15 , 17 - 19 , 22 Face-to-face, in-depth, and semi-structured interviews were conducted. The topics covered in the interviews included processes in the introduction of new care services and experiences of barriers and facilitators to collaborative work in general practices. Two studies did not specify the type of interviews conducted and did not report sample questions. 15 , 18

Observations

In 2 studies, qualitative observations were carried out. 17 , 18 During the observations, the physical design of the clinical patients’ rooms and office spaces was examined. 17 Hungerford et al 18 did not explain what information was collected during the observations. In both studies, the type of observation was not specified. Observations were generally recorded as field notes.

Public documents

In 3 studies, various qualitative public documents were studied. 15 , 19 , 22 These documents included role description, education curriculum, governance frameworks, websites, and newspapers with information about the implementation of the role and general practice. Only 1 study failed to specify the type of document and the collected data. 15

Electronic health records

In 1 study, qualitative documentation was investigated. 17 This included a review of dashboards (eg, provider productivity reports or provider quality dashboards in the electronic health record) and quality performance reports (eg, practice-wide or co-management team-wide performance reports).

Between/Across-Method Triangulation

This section describes the between/across methods, which involve employing both qualitative and quantitative data-collection procedures in the same study. 6 , 7 This procedure can also be denoted “methodologic triangulation.” 8 Subsequently, we present the individual data-collection procedures. In 3 studies, information on between/across triangulation was found. 16 , 20 , 21

Mixed methods

Three studies used qualitative and quantitative data-collection procedures. One study combined face-to-face interviews, documentation, and self-assessments. 16 One study employed semi-structured interviews, direct observation, documents, and service records, 20 and another study combined face-to-face interviews, non-participant observation, documents, and questionnaires. 23

All 3 studies used qualitative interviews as the primary data-collection procedure. 16 , 20 , 23 Face-to-face and semi-structured interviews were conducted. In the interviews, data were collected on the introduction of new care services and experiences of barriers to and facilitators of collaborative work in general practices.

Observation

In 2 studies, direct and non-participant qualitative observations were conducted. 20 , 23 During the observations, the interaction between health professionals or the organization and the clinical context was observed. Observations were generally recorded as field notes.

Public documents

In 2 studies, various qualitative public documents were examined. 20 , 23 These documents included role description, newspapers, websites, and practice documents (eg, flyers). In the documents, information on the role implementation and role description of NPs was collected.

Individual journals

In 1 study, qualitative individual journals were studied. 16 These included reflective journals from NPs, who performed the role in primary health care.

Service records

Only 1 study involved quantitative service records. 20 These service records were obtained from the primary care practices and the respective health authorities. They were collected before and after the implementation of an NP role to identify changes in patients’ access to health care, the volume of patients served, and patients’ use of acute care services.

Questionnaires/Assessment

In 2 studies, quantitative questionnaires were used to gather information about the teams’ satisfaction with collaboration. 16 , 21 In 1 study, 3 validated scales were used. The scales measured experience, satisfaction, and belief in the benefits of collaboration. 21 Psychometric performance indicators of these scales were provided. However, the time points of data collection were not specified; similarly, whether the questionnaires were completed online or by hand was not mentioned. A competency self-assessment tool was used in another study. 16 The assessment comprised 70 items and included topics such as health promotion, protection, disease prevention and treatment, the NP-patient relationship, the teaching-coaching function, the professional role, managing and negotiating health care delivery systems, monitoring and ensuring the quality of health care practice, and cultural competence. Psychometric performance indicators were provided. The assessment was completed online with 2 measurement time points (pre self-assessment and post self-assessment).

Data-Analysis Triangulation

This section describes data-analysis triangulation, which involves the combination of 2 or more methods of analyzing data. 6 Subsequently, we present within-case analysis and cross-case analysis.

Mixed-methods analysis

Three studies combined qualitative and quantitative methods of analysis. 16 , 20 , 21 Two studies involved deductive and inductive qualitative analysis, and qualitative data were analyzed thematically. 20 , 21 One used deductive qualitative analysis. 16 The method of analysis was not specified in the studies. Quantitative data were analyzed using descriptive statistics in 3 studies. 16 , 20 , 23 The descriptive statistics comprised the calculation of the mean, median, and frequencies.

Qualitative methods of analysis

Two studies combined deductive and inductive qualitative analysis, 19 , 22 and 2 studies only used deductive qualitative analysis. 15 , 18 Qualitative data were analyzed thematically in 1 study, 22 and data were treated with content analysis in the other. 19 The method of analysis was not specified in the 2 studies.

Within-case analysis

In 7 studies, a within-case analysis was performed. 15 - 20 , 22 Six studies used qualitative data for the within-case analysis, and 1 study employed qualitative and quantitative data. Data were analyzed separately, consecutively, or in parallel. The themes generated from qualitative data were compared and then summarized. The individual cases were presented mostly as a narrative description. Quantitative data were integrated into the qualitative description with tables and graphs. Qualitative and quantitative data were also presented as a narrative description.

Cross-case analyses

Of the multiple-case studies, 5 carried out cross-case analyses. 15 - 17 , 20 , 22 Three studies described the cross-case analysis using qualitative data. Two studies reported a combination of qualitative and quantitative data for the cross-case analysis. In each multiple-case study, the individual cases were contrasted to identify the differences and similarities between the cases. One study did not specify whether a within-case or a cross-case analysis was conducted. 23

Confirmation or contradiction of data

This section describes confirmation or contradiction through qualitative and quantitative data. 1 , 4 Qualitative and quantitative data were reported separately, with little connection between them. As a result, the conclusions on neither the comparisons nor the contradictions could be clearly determined.

Confirmation or contradiction among qualitative data

In 3 studies, the consistency of the results of different types of qualitative data was highlighted. 16 , 19 , 21 In particular, documentation and interviews or interviews and observations were contrasted:

  • Confirmation between interviews and documentation: The data from these sources corroborated the existence of a common vision for an NP-led clinic. 19
  • Confirmation between interviews and observations: NPs experienced pressure to find and maintain their position within the existing system. Nurse practitioners and general practitioners performed complete episodes of care, each without collaborative interaction. 21
  • Contradiction between interviews and documentation: For example, interviewees mentioned that differentiating the scope of practice between NPs and physicians is difficult as there are too many areas of overlap. However, a clear description of the scope of practice for the 2 roles was provided. 21

Confirmation through a combination of qualitative and quantitative data

Both types of data showed that NPs and general practitioners wanted to have more time in common to discuss patient cases and engage in personal exchanges. 21 In addition, the qualitative and quantitative data confirmed the individual progression of NPs from less competent to more competent. 16 One study pointed out that qualitative and quantitative data obtained similar results for the cases. 20 For example, integrating NPs improved patient access by increasing appointment availability.

Contradiction through a combination of qualitative and quantitative data

Although questionnaire results indicated that NPs and general practitioners experienced high levels of collaboration and satisfaction with the collaborative relationship, the qualitative results drew a more ambivalent picture of NPs’ and general practitioners’ experiences with collaboration. 21

Research Question and Design

The studies included in this scoping review evidenced various research questions. The recommended formats (ie, how or why questions) were not applied consistently. Without such questions, the rationale for choosing a case study design is unclear, because the research question is the major guide for determining the research design. 2 Furthermore, case definitions and designs were applied variably. The lack of standardization is reflected in differences in the reporting of these case studies. Generally, case study research is viewed as allowing much more freedom and flexibility. 5 , 24 However, this flexibility and the lack of uniform specifications lead to confusion.

Methodologic Triangulation

Methodologic triangulation, as described in the literature, can be somewhat confusing as it can refer to either data-collection methods or research designs. 6 , 8 For example, methodologic triangulation can allude to qualitative and quantitative methods, indicating a paradigmatic connection. Methodologic triangulation can also point to qualitative and quantitative data-collection methods, analysis, and interpretation without specific philosophical stances. 6 , 8 Regarding “data-collection methods with no philosophical stances,” we would recommend using the wording “data source triangulation” instead. Thus, the demarcation between the method and the data-collection procedures will be clearer.

Within-Method and Between/Across-Method Triangulation

Yin 1 advocated the use of multiple sources of evidence so that a case or cases can be investigated more comprehensively and accurately. Most studies included multiple data-collection procedures. Five studies employed a variety of qualitative data-collection procedures, and 3 studies used qualitative and quantitative data-collection procedures (mixed methods). In contrast, no study contained 2 or more quantitative data-collection procedures. In particular, quantitative data-collection procedures—such as validated, reliable questionnaires, scales, or assessments—were not used exhaustively. The prerequisites for using multiple data-collection procedures are availability, the knowledge and skill of the researcher, and sufficient financial funds. 1 To meet these prerequisites, research teams consisting of members with different levels of training and experience are necessary. Multidisciplinary research teams need to be aware of the strengths and weaknesses of different data sources and collection procedures. 1

Qualitative methods of analysis and results

When using multiple data sources and analysis methods, it is necessary to present the results in a coherent manner. Although the importance of multiple data sources and analysis has been emphasized, 1 , 5 the description of triangulation has tended to be brief. Thus, traceability of the research process is not always ensured. The sparse description of the data-analysis triangulation procedure may be due to the limited number of words in publications or the complexity involved in merging the different data sources.

Only a few concrete recommendations regarding the operationalization of the data-analysis triangulation with the qualitative data process were found. 25 A total of 3 approaches have been proposed 25 : (1) the intuitive approach, in which researchers intuitively connect information from different data sources; (2) the procedural approach, in which each comparative or contrasting step in triangulation is documented to ensure transparency and replicability; and (3) the intersubjective approach, which necessitates a group of researchers agreeing on the steps in the triangulation process. For each case study, one of these 3 approaches needs to be selected, carefully carried out, and documented. Thus, in-depth examination of the data can take place. Farmer et al 25 concluded that most researchers take the intuitive approach; therefore, triangulation is not clearly articulated. This trend is also evident in our scoping review.

Mixed-methods analysis and results

Few studies in this scoping review used a combination of qualitative and quantitative analysis. However, creating a comprehensive stand-alone picture of a case from both qualitative and quantitative methods is challenging. Findings derived from different data types may not automatically coalesce into a coherent whole. 4 O’Cathain et al 26 described 3 techniques for combining the results of qualitative and quantitative methods: (1) developing a triangulation protocol; (2) following a thread by selecting a theme from 1 component and following it across the other components; and (3) developing a mixed-methods matrix.

The triangulation protocol provides the most detailed guidance on how to conduct triangulation and takes place at the interpretation stage of the research process. 26 This protocol was developed for multiple qualitative data but can also be applied to a combination of qualitative and quantitative data. 25 , 26 It is possible to determine agreement, partial agreement, “silence,” or dissonance between the results of qualitative and quantitative data. The protocol is intended to bring together the various themes from the qualitative and quantitative results and identify overarching meta-themes. 25 , 26

The “following a thread” technique is used in the analysis stage of the research process. To begin, each data source is analyzed to identify the most important themes that need further investigation. Subsequently, the research team selects 1 theme from 1 data source and follows it up in the other data source, thereby creating a thread. The individual steps of this technique are not specified. 26 , 27

A mixed-methods matrix is used at the end of the analysis. 26 All the data collected on a defined case are examined together in 1 large matrix, paying attention to cases rather than variables or themes. In a mixed-methods matrix (eg, a table), the rows represent the cases for which both qualitative and quantitative data exist. The columns show the findings for each case. This technique allows the research team to look for congruency, surprises, and paradoxes among the findings as well as patterns across multiple cases. In our review, we identified only one of these 3 approaches in the study by Roots and MacDonald. 20 These authors mentioned that a causal network analysis was performed using a matrix. However, no further details were given, and reference was made to a later publication. We could not find this publication.
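To make the structure concrete, the sketch below shows what a minimal mixed-methods matrix might look like in Python with pandas; the case names and findings are hypothetical placeholders, not data from the reviewed studies.

```python
# Hypothetical illustration of a mixed-methods matrix: rows are cases,
# columns hold the qualitative and quantitative findings per case.
import pandas as pd

matrix = pd.DataFrame(
    {
        "qualitative_finding": [
            "NP role accepted by the team",
            "scope of practice perceived as unclear",
        ],
        "quantitative_finding": [
            "appointment availability increased",
            "collaboration score moderate",
        ],
    },
    index=["Case A (rural practice)", "Case B (urban practice)"],  # hypothetical cases
)

# Reading across a row reveals convergence or dissonance within a case;
# reading down a column reveals patterns across cases.
print(matrix)
```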

Case Studies in Nursing Research and Recommendations

Because it focused on the implementation of NPs in primary health care, the setting of this scoping review was narrow. However, triangulation is essential for research in this area, and this type of research was found to provide a good basis for understanding methodologic and data-analysis triangulation. Despite the lack of traceability in the description of the data and methodologic triangulation, we believe that case studies are an appropriate design for exploring new nursing roles in existing health care systems. This is evidenced by the fact that case study research is widely used in many social science disciplines as well as in professional practice. 1 To strengthen this research method and increase the traceability of the research process, we recommend using the reporting guideline and reporting checklist by Rodgers et al. 9 This reporting checklist needs to be complemented with methodologic and data-analysis triangulation. A procedural approach needs to be followed in which each comparative step of the triangulation is documented. 25 A triangulation protocol or a mixed-methods matrix can be used for this purpose. 26 If a publication's word limit precludes full reporting, the triangulation protocol or mixed-methods matrix should at least be identified, for example in supplemental material. A schematic representation of methodologic and data-analysis triangulation in case studies can be found in Figure 2 .

Figure 2. Schematic representation of methodologic and data-analysis triangulation in case studies (own work).

Limitations

This study suffered from several limitations that must be acknowledged. Given the nature of scoping reviews, we did not analyze the evidence reported in the studies. However, 2 reviewers independently reviewed all the full-text reports with respect to the inclusion criteria. The focus on the primary care setting with NPs (master’s degree) was very narrow, and only a few studies qualified. Thus, possible important methodological aspects that would have contributed to answering the questions were omitted. Studies describing the triangulation of 2 or more quantitative data-collection procedures could not be included in this scoping review due to the inclusion and exclusion criteria.

Conclusions

Given the various processes described for methodologic and data-analysis triangulation, we can conclude that triangulation in case studies is poorly standardized. Consequently, the traceability of the research process is not always given. Triangulation is complicated by the confusion of terminology. To advance case study research in nursing, we encourage authors to reflect critically on methodologic and data-analysis triangulation and use existing tools, such as the triangulation protocol or mixed-methods matrix and the reporting guideline checklist by Rodgers et al, 9 to ensure more transparent reporting.


Acknowledgments

The authors thank Simona Aeschlimann for her support during the screening process.

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.


Supplemental Material: Supplemental material for this article is available online.

Open access | Published: 31 August 2024

Spatial analysis of the impact of urban built environment on cardiovascular diseases: a case study in Xixiangtang, China

  • Shuguang Deng 1 ,
  • Jinlong Liang 1 ,
  • Ying Peng 2 ,
  • Wei Liu 3 ,
  • Jinhong Su 1 &
  • Shuyan Zhu 1  

BMC Public Health volume 24, Article number: 2368 (2024)


Background

The built environment, as a critical factor influencing residents' cardiovascular health, has a significant potential impact on the incidence of cardiovascular diseases (CVDs).

Methods

Taking Xixiangtang District in Nanning City, Guangxi Zhuang Autonomous Region of China as a case study, we utilized the geographic location information of CVD patients, detailed road network data, and urban points of interest (POI) data. Kernel density estimation (KDE) and spatial autocorrelation analysis were employed to identify the spatial distribution patterns, spatial clustering, and spatial correlations of built environment elements and diseases. The GeoDetector method (GDM) was used to assess the impact of environmental factors on diseases, and geographically weighted regression (GWR) analysis was adopted to reveal the spatially heterogeneous effects of environmental factors on CVD risk.

Results

The results indicate that the built environment elements and CVD samples exhibit significant clustering in their spatial distribution, with a positive correlation between the distribution density of environmental elements and the incidence of CVDs (Moran's I > 0, p < 0.01). Factor detection revealed that the distribution of healthcare facilities had the most significant impact on CVDs (q = 0.532, p < 0.01), followed by shopping and consumption (q = 0.493, p < 0.01), dining (q = 0.433, p < 0.01), and transportation facilities (q = 0.423, p < 0.01), while the impact of parks and squares (q = 0.174, p < 0.01) and road networks (q = 0.159, p < 0.01) was relatively smaller. Additionally, the interaction between different built environment elements exhibited a bi-factor enhancement effect on CVDs. In the local analysis, the spatial heterogeneity of different built environment elements' effects on CVDs further revealed regional differences and complexities.

Conclusions

The spatial distribution of built environment elements is significantly correlated with CVDs to varying degrees and impacts differently across regions, underscoring the importance of the built environment on cardiovascular health. When planning and improving urban environments, elements and areas that have a more significant impact on CVDs should be given priority consideration.


Cardiovascular diseases (CVDs) are among the most common and lethal diseases worldwide, with both the number of affected individuals and the mortality rate rising continuously over the past two decades. Statistical data reveal that from 1990 to 2019, the number of individuals with CVDs globally increased from 271 million to 523 million, while deaths climbed from 12.1 million to 18.6 million, accounting for approximately one-third of all annual global deaths [ 1 ]. CVDs not only pose a global health challenge but also exert immense pressure on healthcare systems and economies [ 2 ]. According to the World Heart Federation, global medical costs for CVDs are projected to rise from approximately 863 billion US dollars in 2010 to 1,044 billion US dollars by 2030 [ 3 ]. It is therefore particularly important to explore the mechanisms that influence CVDs and to develop effective, sustainable strategies to reduce risk and prevent these diseases.

The urban built environment refers to the physical structures and man-made surroundings of an urban area, including buildings, transportation systems, infrastructure, land-use planning, and elements of natural and artificial space [ 4 ]. Numerous studies have examined the close connection between the built environment and human health, particularly cardiovascular health. This research indicates that the built environment affects cardiovascular health through a network of interacting factors, including contributors to CVDs such as obesity, diabetes, and high blood pressure [ 5 , 6 , 7 , 8 , 9 , 10 ], environmental stressors such as traffic noise and air pollution [ 11 , 12 ], and aspects of physical exercise, psychological stress, and lifestyle [ 13 , 14 , 15 , 16 , 17 ], all of which collectively shape the pathogenesis of CVDs [ 18 , 19 , 20 ]. Studies show that optimized urban design, such as rational land allocation and well-planned street layouts, can give people better access to everyday services and encourage active attitudes and healthy lifestyles, thereby reducing the risk of CVDs [ 21 , 22 ]. Compact urban development can encourage physical activity, reducing the risk of cardiovascular and metabolic problems [ 23 ]. In contrast, long commutes and high traffic density may lead to chronic stress and lack of exercise, increasing the risk of obesity and hypertension, whereas appropriate intersection density, land-use diversity, destination convenience, and accessibility can encourage walking, improve health, and reduce the risk of cardiovascular-related problems such as obesity, diabetes, hypertension, and dyslipidemia [ 24 , 25 , 26 ]. The density and accessibility of supermarkets directly affect the dietary habits of community residents; excessive density may increase the risk of obesity and diabetes and correlates with blood pressure levels [ 27 ]. Urban green spaces and outdoor recreational areas have a positive effect on cardiovascular health: green spaces not only offer places for exercise and relaxation but also help alleviate stress, improve mental states, and enhance air quality, thereby mitigating the harm caused by air pollution and protecting cardiac and vascular health [ 28 ]. Research also indicates that individuals living in areas with high greenery rates are more likely to have opportunities for physical activity, mental well-being, and healthy lifestyles, thereby reducing CVD risk [ 29 , 30 ]. In summary, scientific and rational urban planning, encompassing diversified land use, appropriate building density, good street connectivity, convenient destinations, short commutes, and attractive environments, is a key factor in promoting overall health and preventing CVDs.

Although numerous studies have focused on exploring the relationship between the built environment and CVDs, the specific mechanisms underlying this relationship remain unclear. This knowledge gap is mainly due to the complexity of the built environment itself and the multifactorial pathogenesis of CVDs. Current research mostly concentrates on individual aspects of the built environment, such as noise, air pollution, green spaces, and transportation [ 31 ], lacking consideration for the overall complexity of the built environment. Many elements of the built environment are interactive; for instance, pedestrian-friendly urban design may enhance physical activity and social interaction, yet it could also be counteracted by air and noise pollution caused by urban traffic [ 32 ]. Therefore, the same element of the built environment might have different effects in different contexts, adding complexity to the study of the built environment. Furthermore, while existing research has exhibited considerable depth and breadth in exploring the complex and dynamic relationship between the built environment and CVDs, many areas still require further improvement and deepening. Traditional linear correlation analyses, such as OLS and logistic regression models, have been widely used to assess the significance level between built environment characteristics and CVDs mortality rates, and to investigate factors such as intersection density, slope, greening, and commercial density [ 33 , 34 ]. However, these methods fall short in addressing the complexity and non-linear characteristics of spatial data.

Therefore, from a geographical perspective, it is particularly important to adopt more appropriate methods to capture the non-stationarity and heterogeneity of spatial data and to explore the spatial correlation characteristics between the built environment and CVDs. However, current research utilizing spatial models has mainly focused on macro-level perspectives, such as national or provincial levels. For example, ŞENER et al. employed spatial autocorrelation models and hot spot analysis models to assess the spatiotemporal variation characteristics of CVD mortality across multiple provincial administrative regions [ 35 ]. Baptista et al. analyzed the impact of factors such as per capita GDP, urbanization rate, education, and cigarette consumption on the growth trends of CVD incidence using spatial lag and spatial error models across different countries or regions [ 36 ]. Eun et al. used Bayesian spatial multilevel models to measure built environment variables in 546 administrative districts of Gyeonggi Province, South Korea, and evaluated the impact of the built environment on CVDs [ 37 ]. While these studies have, to some extent, revealed the spatial distribution characteristics of CVDs and their spatial relationships with environmental features, the scope of these studies is often large, and they tend to overlook the heterogeneity at the micro-level within cities and its specific impact on residents' health. As a result, it is challenging to accurately capture the differential effects of the built environment on CVD incidence across different areas within a city, and many critical environmental factors at the micro-geographical scale, which are directly related to the daily lives and health of residents, may be obscured.

Given this, we focus on Xixiangtang District in Nanning City, China, and construct a research framework centered on multi-source data, including the distribution of CVDs, road networks, and urban POI data. By employing KDE to reveal hotspot areas, spatial autocorrelation analysis to explore spatial dependence, the GDM to dissect key factors, and GWR to capture the spatial heterogeneity effects, we deeply analyze the complex mechanisms by which the urban built environment influences the incidence of CVDs. Our study aims to answer: Is there a significant spatial association between urban built environment elements and the incidence rate of CVDs? To what extent do different built environment elements impact CVDs? And, what are the regional differences in the impact of built environment elements on CVDs in different areas?

This study focuses on Xixiangtang District in Nanning City (Fig.  1 ), an important administrative district located in the northwest of Nanning City, covering an area of approximately 1,276 square kilometers with a permanent population of over one million. As an exemplary early-developed area of Nanning City, the built environment of Xixiangtang not only carries a rich historical and cultural heritage but also witnesses the transformation from a traditional old town to a modern emerging area, forming a unique urban–rural transitional zone. However, with the acceleration of urbanization, Xixiangtang District also faces numerous environmental challenges, such as declining air quality, congested traffic networks, increasing noise pollution, and continuously rising population density, all of which may pose potential threats to residents' cardiovascular health. Therefore, choosing the built environment of Xixiangtang as the core area of this study is not only due to its representativeness but also because the issues faced by this area are of profound practical significance for exploring the health impacts of urbanization and formulating effective environmental improvement strategies.

Figure 1. Location of study area.

The CVD case data is sourced from the cardiovascular department's medical records at Guangxi National Hospital. Located in the southeastern core area of Xixiangtang District, near metro stations and densely populated areas, the hospital's superior geographical location and convenient transportation conditions greatly facilitate patient visits, especially for those seeking high-level cardiovascular medical services. Although spatial distance is an important consideration for patients when choosing a medical facility, our study on the spatial distribution patterns of CVDs also takes into account various influencing factors, including socioeconomic status, environmental factors, patient health conditions, and healthcare-seeking behaviors, ensuring the depth and accuracy of the results. Additionally, Guangxi National Hospital is one of the few top-tier (tertiary A) comprehensive hospitals in Xixiangtang District, with its cardiovascular department being a key specialty. The department's outstanding reputation and wide influence, combined with its advantages in equipment, technology, and healthcare costs compared to other non-specialized cardiovascular departments in the region, make it particularly attractive to patients in Xixiangtang, thus rendering the data relatively representative. To ensure the fairness of our study results, we have implemented multiple verification measures, including comprehensive data collection, independent evaluation of medical standards, rigorous statistical analysis, and consideration of healthcare costs.

With authorization from Guangxi National Hospital, we obtained and analyzed the cardiovascular department's data records. Our study adheres to ethical principles and does not involve any operations that have a substantial impact on patients. The cardiovascular data records include basic patient information (such as age, gender, address, etc.), diagnostic information (disease type, diagnosis date, etc.), and treatment records. We focused on CVD patients diagnosed between January 1, 2020, and December 31, 2022. Through systematic screening and organization, we constructed a database of CVD patients during this period. During the data processing procedure, we implemented a rigorous data cleaning process, identifying and excluding incomplete, duplicate, or abnormal data records. This included checking for missing data, logical errors (such as extremely large or small ages), and consistency in diagnostic codes, ensuring the quality and reliability of the data. After data cleaning, we selected 3,472 valid samples, which are representative in terms of disease types, patient characteristics, and geographic distribution. Considering the study involves geographic location analysis, we used a text-to-coordinate tool developed based on the Amap (Gaode) API to convert patient address information into precise geographic coordinates. Finally, using ArcGIS 10.8 software, we visualized the processed case data on a map.
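As an illustration of this address-to-coordinate step, the sketch below shows how such a conversion might be scripted against the Amap (Gaode) geocoding web service; the endpoint and response fields follow Amap's public documentation as we understand it, the API key is a placeholder, and the example address is hypothetical.

```python
# Hedged sketch of address -> coordinate conversion via the Amap (Gaode)
# geocoding web service; endpoint and fields per Amap's public docs as we
# understand them. AMAP_KEY is a placeholder, not a real credential.
import requests

AMAP_KEY = "your-amap-key"

def geocode(address: str, city: str = "Nanning"):
    """Return (lng, lat) for an address, or None if it cannot be resolved."""
    resp = requests.get(
        "https://restapi.amap.com/v3/geocode/geo",
        params={"key": AMAP_KEY, "address": address, "city": city},
        timeout=10,
    )
    data = resp.json()
    if data.get("status") == "1" and data.get("geocodes"):
        lng, lat = data["geocodes"][0]["location"].split(",")  # "lng,lat" string
        return float(lng), float(lat)
    return None  # unresolved: flag the record for manual review or exclusion

# e.g. coords = geocode("Xixiangtang District, Daxue East Road (hypothetical)")
```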

As a multidimensional and comprehensive conceptual framework, the built environment encompasses a vast and intricate system of elements. Given the accessibility, completeness of data, and the robust foundation in current research domains, we have centered our in-depth analysis on two core components: the urban road system and urban POIs. Road data is primarily sourced from OpenStreetMap (OSM) and processed using ArcGIS 10.8 to filter and handle incomplete records. We ultimately selected five types of roads for analysis: highways, expressways, arterial roads, secondary roads, and local roads [ 38 ]. Urban POI data was selected based on existing research and obtained through Amap. Amap is a leading map service provider in China, known for its vast user data, precise geocoding system, and advanced intelligent analysis technology, which accurately captures and presents the spatial distribution and attribute characteristics of various urban facilities. We used Amap's API interface and offline map data package to obtain the coordinates and basic attributes of POIs in the study area, including six key environment elements: dining [ 39 ], parks [ 40 ], transportation [ 20 ], shopping [ 41 ], sports [ 42 ], and healthcare [ 43 ] (Table  1 ). These elements significantly reflect the distribution status of the urban built environment. This comprehensive and detailed data provides a solid foundation for further exploring the relationship between the built environment and cardiovascular health.

Spatial analysis

Based on existing research findings, we have identified key built environment factors that influence the occurrence of cardiovascular diseases (CVDs) and meticulously processed the data sourced from [ 34 , 35 , 44 ]. The preprocessed data was then subjected to spatial analysis utilizing software tools such as ArcGIS 10.8, Geoda, and the Geographic Detector. Through various methods including KDE, spatial autocorrelation analysis (encompassing both univariate and bivariate analyses), factor detection and interaction detection using the Geographic Detector, as well as GWR, we aimed to explore the potential links between the urban built environment and CVDs (Fig.  2 ).

Figure 2. Research framework.

Kernel Density Estimation (KDE)

Before delving into the complex relationship between the built environment and CVDs, it is crucial to accurately depict the spatial distribution of these key elements within the study area. To this end, we introduced KDE as our core analytical tool. KDE is a non-parametric method used to estimate the probability density function of a random variable, and we implemented it using ArcGIS 10.8 software. Compared to other density estimation methods, such as simple counting or histograms, KDE more accurately reflects the true distribution of spatial elements, helping us identify hotspots and cold spots in the city with greater precision. The core of the method lies in assigning a smooth kernel function to each observation point; the kernel's influence range over the surrounding space is set by the bandwidth. The density distribution map of the entire area is then obtained by overlaying the kernel functions of all observation points [ 45 , 46 , 47 ]. In the parameter settings, we set the cell size to 100 m, based on a comprehensive consideration of the study area's extent, the distribution characteristics of the geographic phenomena, and computational resource limits. This maintained sufficient precision while avoiding excessive computational burden and amplification of data noise. To further refine the analysis and visually present the continuous spatial distribution of CVDs, we used the natural breaks method to classify the KDE results into five levels. KDE visually displays the continuous spatial distribution of CVDs, identifying high-risk and low-risk areas, and provides foundational data support for the subsequent spatial analyses.
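As a rough illustration of the kernel-overlay idea, the following sketch implements a grid-based density estimate with a quartic kernel (the kernel ArcGIS uses for its Kernel Density tool) and the paper's 100 m cell size; the bandwidth value and input coordinates are hypothetical assumptions.

```python
# Grid-based kernel density sketch with a quartic kernel (as used by the
# ArcGIS Kernel Density tool) and the paper's 100 m cell size; the
# bandwidth and input coordinates are hypothetical.
import numpy as np

def kernel_density(points: np.ndarray, bandwidth: float, cell: float = 100.0):
    """points: (n, 2) projected x/y coordinates in meters."""
    xmin, ymin = points.min(axis=0) - bandwidth
    xmax, ymax = points.max(axis=0) + bandwidth
    gx, gy = np.meshgrid(np.arange(xmin, xmax, cell), np.arange(ymin, ymax, cell))
    density = np.zeros_like(gx)
    for px, py in points:
        d2 = (gx - px) ** 2 + (gy - py) ** 2
        inside = d2 < bandwidth ** 2
        # quartic kernel: 3 / (pi h^2) * (1 - (d/h)^2)^2 within the bandwidth
        density[inside] += 3 / (np.pi * bandwidth**2) * (1 - d2[inside] / bandwidth**2) ** 2
    return density

# e.g. surface = kernel_density(case_xy, bandwidth=1000.0)  # case_xy assumed
```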

Spatial autocorrelation analysis

Spatial autocorrelation analysis is a statistical method used to assess the similarity or correlation between observed values in geographic space. We derived the point attribute values from the kernel density transformation and conducted univariate global spatial autocorrelation analysis, as well as bivariate global spatial autocorrelation analysis between built environment factors and CVDs using Geoda software. Univariate global spatial autocorrelation analysis was used to study the spatial distribution characteristics of the overall dataset, using Moran's I to evaluate whether the dataset exhibits spatial autocorrelation, indicating clustering or dispersion trends [ 48 , 49 ]. Bivariate global spatial autocorrelation further analyzed the spatial correlation between different indicators [ 50 , 51 ]. Spatial autocorrelation analysis helps verify whether the spatial clustering in KDE results is significant and preliminarily explores whether there is spatial interdependence between environmental factors and CVDs.

The results of the spatial autocorrelation analysis include the Moran's I index, which directly reflects the strength and direction of spatial autocorrelation, as well as key indicators such as p values and Z values, which together constitute a comprehensive quantitative system for evaluating spatial autocorrelation. When the p value is less than 0.01 and the Z value is greater than 2.58, the confidence level reaches 99% and the null hypothesis can be rejected, indicating that the results are highly reliable. The degree of spatial clustering of a variable is measured by Moran's I, which ranges over [-1, 1]. If Moran's I > 0, the variable exhibits positive spatial autocorrelation (clustering), with higher values indicating stronger clustering; if Moran's I < 0, it exhibits negative spatial autocorrelation (dispersion), with values closer to -1 indicating stronger dispersion; and if Moran's I = 0, the variable is randomly distributed in space, with spatial autocorrelation weakening as the value approaches 0 [ 52 ].
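The global Moran's I computation itself is compact; the sketch below shows the univariate statistic for a given spatial weights matrix. How the weights are built (contiguity, distance bands, k-nearest neighbours) is a modelling choice that Geoda handles internally, so the matrix here is an assumed input, and significance would in practice come from a permutation test.

```python
# Univariate global Moran's I for attribute values x and a spatial
# weights matrix W (zero diagonal, every unit assumed to have at least
# one neighbour). W is an assumed input here; Geoda builds it internally.
import numpy as np

def morans_i(x: np.ndarray, W: np.ndarray) -> float:
    n = x.size
    W = W / W.sum(axis=1, keepdims=True)  # row-standardize the weights
    z = x - x.mean()                      # deviations from the mean
    num = n * (z @ W @ z)                 # n * sum_ij w_ij * z_i * z_j
    den = W.sum() * (z @ z)               # S0 * sum_i z_i^2
    return num / den
```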

The GeoDetector method (GDM)

We analyzed the processed kernel density attribute data using the GDM to parse the influence of the built environment on CVDs and uncover the underlying driving factors. The geographic detector tool was developed by a team led by Researcher Jinfeng Wang at the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences [ 53 ]. The GDM mainly includes factor detection, interaction detection, risk area detection, and ecological detection, and it has been widely applied in multiple fields. We used the factor detection function to evaluate the impact of environmental factors on the distribution of CVDs and utilized the interaction detection function to analyze the interaction between different environmental factors [ 54 , 55 ]. The purpose of the factor detector is to detect the extent to which independent variables explain the spatial differentiation of the dependent variable. It quantifies the influence of independent variables on the spatial distribution of the dependent variable to reveal which factors are the main contributors to the spatial distribution differences of the dependent variable. However, the impact of built environment elements on CVDs may not be determined by a single factor but rather by the synergistic effect of multiple built environment factors. Therefore, through the means of interaction detection, we further analyzed the synergistic impact of pairs of built environment elements on the spatial distribution of CVDs.

In this analysis, the q value was used as a quantitative indicator of the influence of environmental factors on CVDs, with values ranging between [0,1]. A higher q value indicates a more significant influence of the environmental factor, whereas a lower q value indicates a smaller influence. Additionally, a significance level of p  < 0.01 further emphasizes the reliability of these factors' significant impact on the distribution of CVD samples.
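The q statistic has a simple variance-decomposition form: q = 1 − (Σ_h N_h σ_h²)/(N σ²), where h indexes the strata of a discretized environmental factor. A minimal sketch, assuming the factor has already been stratified (eg, by natural breaks):

```python
# GeoDetector factor-detection q statistic:
# q = 1 - (sum_h N_h * var_h) / (N * var_total), h = stratum of the factor.
import numpy as np

def q_statistic(y: np.ndarray, strata: np.ndarray) -> float:
    """y: CVD kernel density per analysis unit; strata: integer stratum labels."""
    ssw = sum(y[strata == h].size * y[strata == h].var()  # within-stratum variance
              for h in np.unique(strata))
    return 1.0 - ssw / (y.size * y.var())                 # 1 - SSW / SST

# q near 1: the factor explains most of the spatial variation of y;
# q near 0: almost none. GeoDetector tests q against a noncentral F distribution.
```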

Geographically Weighted Regression (GWR)

However, while the GDM can reveal the overall impact of built environment elements on CVDs, its limitation lies in its difficulty to finely characterize the specific differences and dynamic changes of these impacts within different geographic spatial units. To address this shortcoming, we introduced the GWR model through the spatial analysis tools of ArcGIS 10.8 software for local analysis. This model dynamically maps the distribution and variation trajectory of regression coefficients in geographic space, incorporating the key variable of spatial location into the regression analysis. In this way, the GWR model can reveal the spatial heterogeneity of parameters at different geographic locations, accurately capturing the relationships between local variables, thus overcoming the limitations of traditional global regression models in handling spatial non-stationarity [ 56 , 57 ]. Compared to traditional global regression models, the GWR model excels in reducing model residuals and improving fitting accuracy.
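Conceptually, GWR amounts to one weighted least-squares fit per location, with weights decaying with distance from that location. The sketch below uses a fixed Gaussian kernel for illustration; production tools such as ArcGIS also offer adaptive kernels and select the bandwidth by criteria such as AICc, none of which is reproduced here.

```python
# Conceptual GWR sketch: one weighted least-squares fit per location with
# Gaussian distance-decay weights. Fixed bandwidth, for illustration only.
import numpy as np

def gwr_coefficients(coords: np.ndarray, X: np.ndarray, y: np.ndarray,
                     bandwidth: float) -> np.ndarray:
    """coords: (n, 2); X: (n, k) design matrix incl. intercept column; y: (n,).
    Returns (n, k) local coefficients, one row per location."""
    betas = np.empty((coords.shape[0], X.shape[1]))
    for i in range(coords.shape[0]):
        d2 = ((coords - coords[i]) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * bandwidth**2))            # Gaussian kernel weights
        Xw = X * w[:, None]                             # weight each observation
        betas[i] = np.linalg.solve(Xw.T @ X, Xw.T @ y)  # (X'WX)^-1 X'Wy
    return betas
```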

When interpreting the results of the GWR model, it is necessary to consider the regression coefficients, R² (coefficient of determination), and adjusted R² comprehensively. The dynamic changes in regression coefficients in space reveal the complex relationships between independent and dependent variables at different geographic locations, with their sign and magnitude directly reflecting the nature and intensity of the impact. Although the R² value, as an indicator of the model's goodness of fit, focuses more on local effects in the GWR, its variation still helps to assess the explanatory power of the model in each area. These indicators together form a thorough evaluation of the GWR model's performance. Through a comprehensive evaluation of the GWR model results, we can more precisely capture the relationships between local variables, revealing the specific impact of environmental factors on CVD risk within different regions.

Kernel density distribution characteristics

By applying kernel density analysis, the spatial distribution patterns of the CVD samples and the various built environment elements were mapped in detail, effectively capturing their spatial density characteristics. The resulting kernel density levels were divided into five tiers using the natural breaks method and arranged in descending order, as shown in Fig. 3. The results indicate that high-density areas of elements such as shopping, dining, transportation facilities, and medical care are mainly concentrated in the southeastern part of the city, i.e., the city center. The high-density areas of the road network extend along the southern Yongjiang (Yong River) belt and appear patchy in the city center. Dense areas of parks lie mostly near the southern riverside, while high-density distributions of sports facilities extend across the southeastern and central regions. Overall, the distribution pattern of these environmental factors shows that Xixiangtang District's development mainly extends from southeast to northwest, and that the northeastern part of the district is relatively underdeveloped, with a sparse population and little infrastructure. Additionally, the kernel density distributions show that high-incidence areas of CVDs are concentrated in the southeast, coinciding closely with the high-density areas of most built environment elements.

Figure 3. Kernel density distribution of each element in the study area.

Spatial autocorrelation characteristics

To explore the spatial relationship between urban built environment elements and the distribution of CVDs, spatial autocorrelation analysis was performed using Geoda software [ 58 ]. The study involved univariate and bivariate global spatial autocorrelation analyses (Table  2 ). The results of the analysis passed the significance level test at 0.01, with p values below 0.01 and Z values exceeding 2.58, achieving a 99% confidence level. This reinforces the reliability of the spatial autocorrelation results.

Univariate analysis is used to evaluate the clustering or dispersion of feature points in space. In the univariate analysis, the Moran's I value of the road network was 0.957, indicating a strongly clustered spatial distribution. Moran's I values for other built environment elements, such as parks, transportation facilities, sports and fitness, and medical care, all exceeded 0.9, while the values for shopping and dining surpassed 0.8. By comparison, the Moran's I value for the CVD samples was 0.697, also revealing significant aggregation. Overall, the clustered nature of the built environment elements and CVD samples in Xixiangtang District implies that these elements are not randomly distributed but follow clear patterns of spatial clustering.

Bivariate analysis, in turn, evaluates the spatial correlation between different environmental factors and CVDs, further revealing their spatial interaction. The results show that all considered environmental elements exhibited a significant positive correlation with CVDs. The spatial association between medical care elements and CVDs was the strongest, with a Moran's I value of 0.431, the only value above 0.4. The Moran's I values for dining, transportation facilities, shopping, and sports and fitness were all above 0.3. Road networks and parks, on the other hand, showed relatively weaker correlations with CVDs, with Moran's I values around 0.1, indicating a comparatively weak spatial connection between these built environment elements and CVDs in the district.

Geodetector results analysis

A detailed analysis of the impact of various environmental factors on CVDs was achieved through the factor detection model of the GDM. According to the factor detection results shown in Table  3 , significant differences in the impact of environmental factors on the distribution of CVD samples were observed. The analysis results indicate that the environmental factors influencing the distribution of CVDs, in descending order of impact, are: healthcare services > shopping > dining > transportation facilities > sports and fitness > parks and squares > road networks. Specifically, healthcare services lead with a q value of 0.532, indicating that the spatial distribution of healthcare services has the most significant impact on the spatial distribution of CVDs. This highlights the importance of a high-density layout of healthcare facilities in the prevention and treatment of CVDs and suggests that individuals at risk for CVDs tend to prefer living in areas with convenient access to medical services [ 59 ].

Next, shopping, dining, and transportation facilities all have q values exceeding 0.4, reflecting their significant effects on the clustering characteristics of the urban built environment and on regional commercial vitality. The concentration of foot traffic these factors bring may, while increasing residents' lifestyle choices, also create psychological burdens and reduce air quality, thereby indirectly straining the cardiovascular system. In contrast, parks and squares and road networks have relatively low q values (both below 0.2), indicating that these elements explain comparatively little of the spatial variation in CVD incidence.

Subsequently, interaction detection was used to analyze the synergistic impact of pairs of built environment elements on the spatial distribution of CVDs. From the results shown in Table  4 , it is evident that any two built environment elements exhibit a bi-factor enhancement effect on CVDs, suggesting that the combined influence of two built environment elements exceeds the effect of a single element. Among these, the interaction between healthcare services and shopping has the greatest impact on CVDs, with a value of 0.571. This indicates that CVDs patients or high-risk individuals tend to prefer living in areas rich in healthcare resources and convenient for shopping, as they can more easily access health services and daily necessities. Conversely, the interaction between road networks and parks and squares has the weakest impact on CVDs, with a value of 0.313. This suggests that their combined effect in reducing CVD risk is relatively limited, possibly due to the negative impacts of road networks, such as traffic congestion and air pollution, which may offset some of the health benefits provided by parks and squares. This result further validates an important point: the impact of the built environment on CVDs is not driven by a single element but by the synergistic effects of multiple environmental factors working together.

Geographically weighted regression analysis

The GDM revealed the influence of built environment factors on CVDs. To further uncover the spatial heterogeneity effects of built environment elements on CVDs in different regions, we employed the GWR model. To ensure the rigor of the analysis, we conducted multicollinearity detection for all built environment elements before establishing the model. We confirmed that the Variance Inflation Factor (VIF) values for all elements did not exceed the conventional threshold of 5, effectively avoiding multicollinearity issues and ensuring the robustness of the model results. The GWR model results showed that the model's coefficient of determination R² was 0.596 and the adjusted R² was 0.575, indicating that the model could adequately explain the relationships between variables in the study. The analysis results also highlighted the spatial non-stationarity of the effects of built environment elements, manifested in different degrees of variation and fluctuation, as shown by the coefficient magnitudes and their dynamic changes in spatial distribution in Table 5.
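For reference, the VIF screening described above reduces to regressing each predictor on the others: VIF_j = 1/(1 − R_j²). A minimal sketch of that check (the threshold of 5 is the one used in this study):

```python
# Multicollinearity screen: VIF_j = 1 / (1 - R_j^2), with R_j^2 from
# regressing predictor j on the remaining predictors.
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """X: (n, k) matrix of predictors, without an intercept column."""
    out = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        A = np.column_stack([np.ones(X.shape[0]), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ beta
        out[j] = 1.0 / (resid.var() / X[:, j].var())  # equals 1 / (1 - R^2)
    return out

# Values below 5 (the threshold used in this study) indicate that
# multicollinearity is unlikely to distort the GWR coefficients.
```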

Looking more closely at the details, as demonstrated in Fig.  4 , the regression coefficients of the dining elements fluctuated relatively little, ranging from -0.372 to 0.471, reflecting a relatively balanced spatial effect. Moreover, although this factor's impact in the Xixiangtang District showed both positive and negative aspects in different areas, more than half of the analysis units indicated positive values, especially in the southern and northeastern parts of the Xixiangtang District. In contrast, the high-incidence areas of CVDs in the eastern part and areas in the north showed negative correlations.

Figure 4. Spatial distribution of the regression coefficients of built environment factors.

The GWR coefficients and their fluctuations for parks were significant, ranging from -69.757 to 35.43, indicating significant spatial differences in their impact on the distribution of CVDs. Specifically, the spatial distribution of positive and negative impacts was nearly 1:1, revealing the complexity of its effects. In high-incidence areas of CVDs, the distribution of parks showed a significantly negative correlation with disease distribution, while a significant increase in positive correlation was observed north of the significantly negative regions. This implies the presence of other moderating factors influencing the direction of the impact of parks on CVDs.

The regression coefficients and fluctuations for shopping were the smallest among the seven environmental factors, confined to a range of -0.093 to 0.219, suggesting a high consistency in its spatial effects. In the Xixiangtang built-up area, nearly two-thirds of the spatial units yielded positive impacts. Particularly in the northern, northeastern, southern, and southeastern regions, the positive impacts of shopping were especially pronounced.

The regression coefficients and fluctuations for transportation facilities were relatively large, ranging from -0.487 to 7.363. For the Xixiangtang District, nearly three-quarters of the analysis units displayed positive spatial impacts, with the largest positive value areas concentrated in the southeastern part. However, areas with negative impacts from transportation facilities were relatively fewer, suggesting a clear positive correlation with the distribution of CVDs.

The fluctuation range of the sports and fitness regression coefficients was also broad, from -10.578 to 33.256. The analysis indicated that only a quarter of the analysis units in the Xixiangtang District had a positive correlation. The most significant positive values were located near the high-density CVD areas, suggesting that sports and fitness facilities may be positively correlated with disease distribution in these areas. Meanwhile, the intensity of the negative correlation increased north of the areas with significant positive values, pointing to the possible moderating effects of other factors on the relationship between sports facilities and CVDs.

The regression coefficients and their fluctuations for healthcare were relatively small, ranging from -1.235 to 3.352. In the Xixiangtang District, the vast majority of analysis units showed a positive correlation, especially in the northern regions. The southern areas exhibited negative correlations, highlighting potential differences in medical resources in that region.

Of all the built environment elements, road networks had the largest range of regression coefficients and fluctuations, swinging from -7905.743 to 411.617 and demonstrating extremely strong spatial variability. Only a small portion of the spatial units in the Xixiangtang District showed positive correlations, while the significantly negative regions were mostly concentrated in high-incidence areas for CVDs. This spatial pattern resembled the negative-correlation trend observed for parks, whose distribution was likewise significantly negatively correlated with the distribution of CVDs. Notably, the effect of road networks was opposite to that of transportation facilities, which could be related to road network connectivity and traffic congestion conditions, factors that could influence the incidence of CVDs.

This study reveals a high-density aggregation of CVDs and various built environment elements in the southeastern part of the study area, i.e., the urban central area. Through spatial statistical analysis, all examined environmental elements and CVDs showed high Moran's I values, indicating significant clustering in their spatial distribution. Furthermore, the positive spatial correlation between these environmental elements and CVDs corroborates the deep connection between the urban built environment and the incidence of CVDs.

Geodetector analysis reveals significant differences in the impact of different built environment elements on CVDs. Healthcare facilities had the most influence, followed by shopping, dining, and transportation facilities, while parks and road networks had relatively weaker impacts. Notably, the occurrence of CVDs is not only related to individual built environment elements but likely results from the combined effects of multiple elements. Further interaction detection analysis confirmed this hypothesis, finding that the joint impact of any two environmental elements was stronger than any individual element, showing a clear dual-factor enhancement effect. Specifically, the interaction between healthcare and shopping had the most significant impact on the distribution of CVDs, while the combined effect of road networks and parks was the least. By delving into individual factors and their interaction effects, this study reveals a comprehensive view of the impact of the built environment on CVDs, highlighting the complex relationships and differences between environmental elements and the occurrence of diseases.

The GWR model was used to analyze in detail how built environment elements affect CVDs in different regions, aiming to gain a deep understanding of the local effects of the built environment. The research results showed the regression coefficients of built environment elements and their range of variation. Specifically, the regression coefficients for dining exhibited relatively stable trends in spatial distribution. Although the overall impact was moderate, slight fluctuations revealed a slightly enhanced positive correlation in specific areas such as densely commercial or culturally vibrant dining regions. Particularly in the southern and northeastern parts, the combination of diverse dining options and frequent dining consumption patterns showed a slight positive correlation with CVD risk. This reflects the complex impact of dietary habits, food composition, and intake levels on cardiovascular health [ 60 , 61 ].

The regression coefficients for parks and squares showed relatively large fluctuations in spatial distribution, indicating significant regional heterogeneity. This is mainly due to factors such as differences in regional population density and per capita park and square area. In our study, the southeastern region, which is a high-incidence area for CVDs, exhibited negative regression coefficients for parks and squares. This is because this region is the central urban area with a high population density, leading to a significant shortage of per capita green space, thus showing a negative correlation. Conversely, in the northern region, where population distribution is more balanced and parks and squares are more abundant, the per capita green space is relatively sufficient. Therefore, CVD patients have more access to green spaces and exercise areas, showing a positive correlation [ 29 ].

The regression coefficients for shopping consumption showed the smallest fluctuations in spatial distribution. The positive and negative effects were not significantly different, with the positive effects being notably concentrated in the northern, northeastern, and southern commercial thriving areas. Compared to other regions, these areas might have relatively well-developed commercial facilities or superior shopping environments. This could indirectly affect CVD risk through various dimensions, such as physical exertion from walking or cycling during shopping and the regulation of psychological states like satisfaction and pleasure after shopping [ 44 ].

The regression coefficients for transportation facilities showed a significant positive correlation in high-incidence areas of CVDs, with notable fluctuations. This deeply reveals the direct and important impact of traffic conditions, especially congestion and pollution, on cardiovascular health across different regions. In traffic-dense areas such as city centers and transportation hubs, high traffic volume, severe congestion, and increased noise and air pollution collectively pose major threats to residents' cardiovascular health. This not only directly harms the cardiovascular system through accumulated psychological stress and exposure to air pollution but also further exacerbates the risk due to a lack of exercise opportunities [ 62 ].

The regression coefficients for sports and fitness facilities exhibited a high degree of heterogeneity in spatial distribution, showing a significant positive correlation in the southeastern high-incidence area for CVDs, which gradually shifts to a negative correlation towards the outer regions. This deeply reflects the regional differences in the allocation of sports and fitness facilities, residents' exercise habits, and participation rates. In areas with well-developed urban facilities and strong resident awareness of physical activity, the positive effects of sports and fitness activities on cardiovascular health are particularly significant. These activities effectively reduce CVD risk by enhancing physical activity, optimizing cardiopulmonary function, and lowering body fat percentage. However, in areas with relatively scarce sports facilities and poor exercise habits among residents, negative impacts may be observed, highlighting the potential threats to public health due to uneven distribution of sports resources and a lack of exercise culture [ 63 ].

The regression coefficients for healthcare services showed regional differences in spatial distribution. In the northern region, due to the lower population density, the abundance and superior quality of per capita healthcare resources have a significant positive effect on residents' cardiovascular health. In contrast, the southern region, with relatively scarce resources or limited service quality, fails to fully realize the potential benefits of healthcare services. This disparity not only reveals the current uneven distribution of healthcare resources but also emphasizes the importance of enhancing the equalization of healthcare services [ 64 ]. The positive impact of healthcare on CVDs is primarily achieved through efficient prevention, precise diagnosis, and timely treatment. Its effectiveness is influenced by multiple factors, including the sufficiency of medical resources, service quality, residents' healthcare-seeking behavior, medical policies, and technological advancements.


These findings provide a more comprehensive understanding of the complex interactions between built environment elements and CVDs. Therefore, it is essential to balance the integrated impact of these factors in urban planning and public health interventions. Based on a comprehensive analysis of existing research and our study's results, we propose the following viewpoints.

Firstly, healthcare is the primary factor influencing the distribution of CVDs. Living near medical institutions offers substantial benefits to cardiovascular patients, not only enhancing the accessibility of medical services but also helping to quickly respond to emergency medical situations, providing a sense of security for patients. We suggest establishing additional medical centers in the densely populated southeastern region to ensure that community members can easily access high-quality medical services [ 65 ].

Secondly, shopping and dining are the next most important factors affecting the spatial distribution of CVDs. Although the spatial variation of these factors is not significant, their long-term cumulative impact should not be overlooked. We recommend that future urban renewal or renovation efforts reasonably control and plan the density of commercial areas, especially in the eastern region. This requires ensuring that residents can enjoy convenient shopping services to meet their daily needs while avoiding the increased living costs and stress caused by excessive commercial concentration. Additionally, it is necessary to strengthen the management of dining environments, including encouraging dining establishments to offer more healthy food options, such as low-sugar, low-fat, and high-fiber dishes. It is also important to increase the availability of healthy dining options by establishing healthy restaurants and vegetarian eateries, while reasonably controlling and optimizing the layout and number of high-sugar and high-fat food outlets within communities to reduce health risks induced by frequent exposure to such foods [ 66 ].

Thirdly, road networks and transportation facilities together form the city's transportation system. In transportation planning, we advocate continuously optimizing road network layouts, reserving space for future traffic growth, and leveraging intelligent technology to optimize traffic signal management and alleviate congestion. In the densely populated eastern and southeastern areas in particular, adding routes and optimizing stop locations can enhance the convenience of public transportation and make it the preferred mode of travel for residents. Complementing this with sound barriers and green belts can effectively reduce the noise and air pollution caused by traffic. Furthermore, promoting green travel such as cycling and walking by building a comprehensive network of cycling lanes and walking paths can foster both health and environmental benefits [ 20 ].

Finally, sports and fitness facilities, along with parks and squares, are essential for improving residents' quality of life and promoting healthy lifestyles. During planning, sports and fitness facilities should be reasonably distributed, especially in the northern part of the study area, to ensure that all communities have convenient access to exercise amenities. Diverse fitness facilities catering to different age groups and exercise needs, such as basketball courts, soccer fields, and fitness equipment zones, should be provided to meet the varied exercise requirements of different groups. Additionally, parks and squares, as crucial spaces for residents' leisure and entertainment, should be planned with a harmonious balance of ecology and landscape. In the densely populated and space-constrained southeastern areas, small green spaces, leisure seating, and children's play facilities can be added to provide residents with a pleasant environment for relaxation and interaction with nature [ 67 ].

We have explored the mechanisms by which environmental elements impact CVDs and proposed suggestions for optimizing the urban built environment, but this paper still has certain limitations. The impact of the environment on health and disease is complex, and due to time and resource constraints, it was not possible to consider and analyze all potential variables comprehensively, which may have some impact on the research results. To further deepen the study of the relationship between the built environment and cardiovascular health, future research could consider the following aspects: first, expand the scope of research, collecting and analyzing data from different cities and regions to better understand geographical differences in the impact of the built environment on cardiovascular health; second, enhance the scientific nature of the research methods, using more objective and precise methods for data collection and analysis to improve the reliability and accuracy of the research; and finally, deepen the study of the mechanisms between the built environment and cardiovascular health, exploring biological and psychological mechanisms to better understand their relationship.

Focusing on the built-up area of Xixiangtang in Nanning City as the research area, this study delves into the intrinsic connection between the urban built environment and CVDs, uncovering several findings. Utilizing hospital cardiovascular data and urban POI data, and employing spatial analysis techniques such as KDE, spatial autocorrelation analysis, geodetectors, and GWR, we systematically assessed the extent and mechanisms through which various built environment elements impact CVDs. The results show a significant positive correlation between the urban built environment and CVDs. Particularly, healthcare facilities, shopping venues, restaurants, and transportation facilities have significant effects on the incidence and distribution of CVDs. The spatial aggregation of these elements and the dense distribution of CVDs demonstrate significant consistency, further confirming the close link between the built environment and CVDs. Simultaneously, we discovered spatial heterogeneity in the impact of different built environment elements on CVDs. This indicates that in planning and improving the urban environment, elements and areas with a greater impact on CVDs should be considered specifically.
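As a simple illustration of the KDE step named above, the following is a generic sketch (not the authors' code): it estimates a POI density surface with a Gaussian kernel, where the coordinates and the 500 m bandwidth are hypothetical stand-ins for the study's actual data and settings.

```python
# Generic KDE sketch over POI locations; coordinates and bandwidth are hypothetical.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(42)
poi_xy = rng.uniform(0, 10_000, size=(500, 2))  # assumed projected coordinates in meters

# Fit a Gaussian kernel density model to the POI point pattern.
kde = KernelDensity(kernel="gaussian", bandwidth=500.0).fit(poi_xy)

# Evaluate the fitted density on a regular grid to produce a KDE surface.
xs, ys = np.meshgrid(np.linspace(0, 10_000, 100), np.linspace(0, 10_000, 100))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = np.exp(kde.score_samples(grid)).reshape(xs.shape)  # score_samples returns log-density
print(density.shape)  # (100, 100) raster of relative POI intensity
```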

Availability of data and materials

The datasets used or analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

CVD: Cardiovascular Disease

GWR: Geographically weighted regression

MGWR: Multiscale geographically weighted regression

GD: The GeoDetector method

OSM: OpenStreetMap

KDE: Kernel Density Estimation

POI: Points of Interest

VIF: Variance Inflation Factor

API: Application Programming Interface

Roth GA, Mensah GA, Fuster V. The global burden of cardiovascular diseases and risks: a compass for global action. American College of Cardiology Foundation Washington DC; 2020. p. 2980–1.

Masaebi F, Salehi M, Kazemi M, Vahabi N, Azizmohammad Looha M, Zayeri F. Trend analysis of disability adjusted life years due to cardiovascular diseases: results from the global burden of disease study 2019. BMC Public Health. 2021;21:1–13.


Murray CJ, Aravkin AY, Zheng P, Abbafati C, Abbas KM, Abbasi-Kangevari M, et al. Global burden of 87 risk factors in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019. The lancet. 2020;396(10258):1223–49.

Bloom DE, Cafiero E, Jané-Llopis E, Abrahams-Gessel S, Bloom LR, Fathima S, et al. The global economic burden of noncommunicable diseases. Program on the Global Demography of Aging; 2012.

Xu J, Jing Y, Xu X, Zhang X, Liu Y, He H, et al. Spatial scale analysis for the relationships between the built environment and cardiovascular disease based on multi-source data. Health Place. 2023;83:103048.


Sarkar C, Webster C, Gallacher J. Are exposures to ready-to-eat food environments associated with type 2 diabetes? A cross-sectional study of 347 551 UK Biobank adult participants. Lancet Planetary Health. 2018;2(10):e438–50.

Grazuleviciene R, Andrusaityte S, Gražulevičius T, Dėdelė A. Neighborhood social and built environment and disparities in the risk of hypertension: A cross-sectional study. Int J Environ Res Public Health. 2020;17(20):7696.


Ghosh-Dastidar B, Cohen D, Hunter G, Zenk SN, Huang C, Beckman R, et al. Distance to store, food prices, and obesity in urban food deserts. Am J Prev Med. 2014;47(5):587–95.

Braun LM, Rodríguez DA, Evenson KR, Hirsch JA, Moore KA, Roux AVD. Walkability and cardiometabolic risk factors: cross-sectional and longitudinal associations from the multi-ethnic study of atherosclerosis. Health Place. 2016;39:9–17.

Anza-Ramirez C, Lazo M, Zafra-Tanaka JH, Avila-Palencia I, Bilal U, Hernández-Vásquez A, et al. The urban built environment and adult BMI, obesity, and diabetes in Latin American cities. Nat Commun. 2022;13(1):7977.

Hartig T, Evans GW, Jamner LD, Davis DS, Gärling T. Tracking restoration in natural and urban field settings. J Environ Psychol. 2003;23(2):109–23.

Levy L. Dietary strategies, policy and cardiovascular disease risk reduction in England. Proceedings of the Nutrition Society. 2013;72(4):386–9.

Dalal HM, Zawada A, Jolly K, Moxham T, Taylor RS. Home based versus centre based cardiac rehabilitation: Cochrane systematic review and meta-analysis. BMJ. 2010;340.

Humpel N, Owen N, Leslie E. Environmental factors associated with adults’ participation in physical activity: a review. Am J Prev Med. 2002;22(3):188–99.

Jia X, Yu Y, Xia W, Masri S, Sami M, Hu Z, et al. Cardiovascular diseases in middle aged and older adults in China: the joint effects and mediation of different types of physical exercise and neighborhood greenness and walkability. Environ Res. 2018;167:175–83.

Murtagh EM, Nichols L, Mohammed MA, Holder R, Nevill AM, Murphy MH. The effect of walking on risk factors for cardiovascular disease: an updated systematic review and meta-analysis of randomised control trials. Prev Med. 2015;72:34–43.

Newby DE, Mannucci PM, Tell GS, Baccarelli AA, Brook RD, Donaldson K, et al. Expert position paper on air pollution and cardiovascular disease. Eur Heart J. 2015;36(2):83–93.

Chum A, O’Campo P. Cross-sectional associations between residential environmental exposures and cardiovascular diseases. BMC Public Health. 2015;15:1–12.

Diener A, Mudu P. How can vegetation protect us from air pollution? A critical review on green spaces’ mitigation abilities for air-borne particles from a public health perspective-with implications for urban planning. Sci Total Environ. 2021;796:148605.

Nieuwenhuijsen MJ. Influence of urban and transport planning and the city environment on cardiovascular disease. Nat Rev Cardiol. 2018;15(7):432–8.

Chandrabose M, den Braver NR, Owen N, Sugiyama T, Hadgraft N. Built environments and cardiovascular health: review and implications. J Cardiopulm Rehabil Prev. 2022;42(6):416–22.

Lee E, Choi J, Lee S, Choi B. P70 Association between built environment and cardiovascular diseases. BMJ Publishing Group Ltd; 2019.

Sallis JF, Floyd MF, Rodríguez DA, Saelens BE. Role of built environments in physical activity, obesity, and cardiovascular disease. Circulation. 2012;125(5):729–37.

Chandrabose M, Rachele JN, Gunn L, Kavanagh A, Owen N, Turrell G, et al. Built environment and cardio-metabolic health: systematic review and meta-analysis of longitudinal studies. Obes Rev. 2019;20(1):41–54.

Ewing R, Cervero R. “Does compact development make people drive less?” The answer is yes. J Am Plann Assoc. 2017;83(1):19–25.

Loo CJ, Greiver M, Aliarzadeh B, Lewis D. Association between neighbourhood walkability and metabolic risk factors influenced by physical activity: a cross-sectional study of adults in Toronto, Canada. BMJ Open. 2017;7(4):e013889.

Dendup T, Feng X, Clingan S, Astell-Burt T. Environmental risk factors for developing type 2 diabetes mellitus: a systematic review. Int J Environ Res Public Health. 2018;15(1):78.

Malambo P, Kengne AP, De Villiers A, Lambert EV, Puoane T. Built environment, selected risk factors and major cardiovascular disease outcomes: a systematic review. PLoS ONE. 2016;11(11):e0166846.

Seo S, Choi S, Kim K, Kim SM, Park SM. Association between urban green space and the risk of cardiovascular disease: A longitudinal study in seven Korean metropolitan areas. Environ Int. 2019;125:51–7.

Yeager RA, Smith TR, Bhatnagar A. Green environments and cardiovascular health. Trends Cardiovasc Med. 2020;30(4):241–6.

Liu M, Meijer P, Lam TM, Timmermans EJ, Grobbee DE, Beulens JW, et al. The built environment and cardiovascular disease: an umbrella review and meta-meta-analysis. Eur J Prev Cardiol. 2023;30(16):1801–27.

Koohsari MJ, McCormack GR, Nakaya T, Oka K. Neighbourhood built environment and cardiovascular disease: knowledge and future directions. Nat Rev Cardiol. 2020;17(5):261–3.

Howell NA, Tu JV, Moineddin R, Chen H, Chu A, Hystad P, et al. Interaction between neighborhood walkability and traffic-related air pollution on hypertension and diabetes: the CANHEART cohort. Environ Int. 2019;132:104799.

Patino JE, Hong A, Duque JC, Rahimi K, Zapata S, Lopera VM. Built environment and mortality risk from cardiovascular disease and diabetes in Medellín, Colombia: An ecological study. Landsc Urban Plan. 2021;213:104126.

Şener R, Türk T. Spatiotemporal analysis of cardiovascular disease mortality with geographical information systems. Appl Spat Anal Policy. 2021;14(4):929–45.

Baptista EA, Queiroz BL. Spatial analysis of cardiovascular mortality and associated factors around the world. BMC Public Health. 2022;22(1):1556.

Lee EY, Choi J, Lee S, Choi BY. Objectively measured built environments and cardiovascular diseases in middle-aged and older Korean adults. Int J Environ Res Public Health. 2021;18(4):1861.

Pourabdollah A, Morley J, Feldman S, Jackson M. Towards an authoritative OpenStreetMap: conflating OSM and OS OpenData national maps’ road network. ISPRS Int J Geo Inf. 2013;2(3):704–28.

Mazidi M, Speakman JR. Association of Fast-Food and Full-Service Restaurant Densities With Mortality From Cardiovascular Disease and Stroke, and the Prevalence of Diabetes Mellitus. J Am Heart Assoc. 2018;7(11):e007651.

Grazuleviciene R, Vencloviene J, Kubilius R, Grizas V, Dedele A, Grazulevicius T, et al. The effect of park and urban environments on coronary artery disease patients: a randomized trial. BioMed Res Int. 2015;2015.

Haralson MK, Sargent RG, Schluchter M. The relationship between knowledge of cardiovascular dietary risk and food shopping behaviors. Am J Prev Med. 1990;6(6):318–22.

Hoevenaar-Blom MP, Wendel-Vos GW, Spijkerman AM, Kromhout D, Verschuren WM. Cycling and sports, but not walking, are associated with 10-year cardiovascular disease incidence: the MORGEN Study. Eur J Prev Cardiol. 2011;18(1):41–7.

Sepehrvand N, Alemayehu W, Kaul P, Pelletier R, Bello AK, Welsh RC, et al. Ambulance use, distance and outcomes in patients with suspected cardiovascular disease: a registry-based geographic information system study. Eur Heart J. 2020;9(1_suppl):45–58.


Malambo P, De Villiers A, Lambert EV, Puoane T, Kengne AP. The relationship between objectively-measured attributes of the built environment and selected cardiovascular risk factors in a South African urban setting. BMC Public Health. 2018;18:1–9.

Chen W, Liu L, Liang Y. Retail center recognition and spatial aggregating feature analysis of retail formats in Guangzhou based on POI data. Geogr Res. 2016;35(4):703–16.

Feng L, Lei G, Nie Y. Exploring the eco-efficiency of cultivated land utilization and its influencing factors in black soil region of Northeast China under the goal of reducing non-point pollution and net carbon emission. Environmental Earth Sciences. 2023;82(4):94.

Guan Z, Wang T, Zhi X. Temporal-spatial pattern differentiation of traditional villages in central plains economic region. Econ Geogr. 2017;37(9):225–32.

Chen Y. Development and method improvement of spatial autocorrelation theory based on Moran statistics. Geogr Res. 2009;28(6):1449–63.

Pang R, Teng F, Wei Y. A gwr-based study on dynamic mechanism of population urbanization in JIlin province. Sci Geogr Sin. 2014;34:1210–7.

Anselin L, Rey SJ. Modern spatial econometrics in practice: a guide to GeoDa, GeoDaSpace and PySAL. 2014.

Zhang Z, Shan B, Lin Q, Chen Y, Yu X. Influence of the spatial distribution pattern of buildings on the distribution of PM2.5 concentration. Stochastic Environmental Research and Risk Assessment. 2022:1–13.

Dehnad K. Density estimation for statistics and data analysis. Taylor & Francis; 1987.

Wang JF, Li XH, Christakos G, Liao YL, Zhang T, Gu X, et al. Geographical detectors-based health risk assessment and its application in the neural tube defects study of the Heshun Region, China. Int J Geogr Inf Sci. 2010;24(1):107–27.

Shu T, Ren Y, Shen L, Qian Y. Study on spatial heterogeneity of consumption vibrancy and its driving factors in large city: a case of Chengdu City. Urban Development Studies. 2020;27(1):16–21.

Jinfeng W, Chengdong X. Geodetector: Principle and prospective. Acta Geogr Sin. 2017;72(1):116–34.

Feuillet T, Commenges H, Menai M, Salze P, Perchoux C, Reuillon R, et al. A massive geographically weighted regression model of walking-environment relationships. J Transp Geogr. 2018;68:118–29.

Yu H, Gong H, Chen B, Liu K, Gao M. Analysis of the influence of groundwater on land subsidence in Beijing based on the geographical weighted regression (GWR) model. Sci Total Environ. 2020;738:139405.

Anselin L. An introduction to spatial autocorrelation analysis with GeoDa. Spatial Analysis Laboratory: University of Illinois, Champagne-Urbana, Illinois; 2003.

Nicholl J, West J, Goodacre S, Turner J. The relationship between distance to hospital and patient mortality in emergencies: an observational study. Emerg Med J. 2007;24(9):665–8.

Osman AA, Abumanga ZM. The relationship between physical activity status and dietary habits with the risk of cardiovascular diseases. E Journal of Cardiovascular Medicine. 2019;7(2):72.

Shan Z, Li Y, Baden MY, Bhupathiraju SN, Wang DD, Sun Q, et al. Association between healthy eating patterns and risk of cardiovascular disease. JAMA Intern Med. 2020;180(8):1090–100.

Münzel T, Treede H, Hahad O, Daiber A. Too loud to handle? Transportation noise and cardiovascular disease. Can J Cardiol. 2023;39(9):1204–18.

Halonen JI, Stenholm S, Kivimäki M, Pentti J, Subramanian S, Kawachi I, et al. Is change in availability of sports facilities associated with change in physical activity? A prospective cohort study. Prev Med. 2015;73:10–4.


Wekesah FM, Kyobutungi C, Grobbee DE, Klipstein-Grobusch K. Understanding of and perceptions towards cardiovascular diseases and their risk factors: a qualitative study among residents of urban informal settings in Nairobi. BMJ Open. 2019;9(6):e026852.

Berlin C, Panczak R, Hasler R, Zwahlen M. Do acute myocardial infarction and stroke mortality vary by distance to hospitals in Switzerland? Results from the Swiss National Cohort Study. BMJ Open. 2016;6(11):e013090.

Lim K, Kwan Y, Tan C, Low L, Chua A, Lee W, et al. The association between distance to public amenities and cardiovascular risk factors among lower income Singaporeans. Preventive medicine reports. 2017;8:116–21.

Pereira G, Foster S, Martin K, Christian H, Boruff BJ, Knuiman M, et al. The association between neighborhood greenness and cardiovascular disease: an observational study. BMC Public Health. 2012;12:1–9.


Acknowledgements

Not applicable.

Funding

The General Project of Humanities and Social Sciences Research of the Ministry of Education in 2020: A Study on the Assessment and Planning of Healthy Cities Based on Spatial Data Mining (No. 20YJA630011) and the Natural Resources Digital Industry Academy Construction Project.

Author information

Authors and Affiliations

School of Geography and Planning, Nanning Normal University, Nanning, 530100, Guangxi, China

Shuguang Deng, Jinlong Liang, Jinhong Su & Shuyan Zhu

School of Architecture, Guangxi Arts University, Nanning, 530009, Guangxi, China

Faculty of Innovation and Design, City University of Macau, Macau, 999078, China


Contributions

D.S. provided research topics, conceptual guidance, translation, paper revision, and financial support; L.J. conceived the framework and wrote the original draft; P.Y. checked the manuscript and optimized the charts; L.W. provided suggestions for revision and reviewed and edited the manuscript; S.J. was responsible for data acquisition and editing; Z.S. edited the visual maps.

Corresponding author

Correspondence to Jinlong Liang.

Ethics declarations

Ethics approval and consent to participate

Our study was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki, as well as relevant national and institutional guidelines for human research. The study received approval from the Medical Ethics Committee of Guangxi Zhuang Autonomous Region Nationality Hospital (Approval No.: 2024–65). The de-identified data records from the cardiovascular department that we accessed and analyzed were authorized by Guangxi Nationality Hospital. These data were collected and maintained in compliance with the hospital's patient data management policies and procedures. Given that our study involved only a retrospective analysis of existing medical records, with no direct interaction with patients and no potential for causing any substantial harm, the Medical Ethics Committee of Guangxi Zhuang Autonomous Region Nationality Hospital determined that individual patient informed consent was not required. Nonetheless, we have ensured that all data used in the study were fully anonymized and protected, adhering to the highest standards of confidentiality and privacy.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Deng, S., Liang, J., Peng, Y. et al. Spatial analysis of the impact of urban built environment on cardiovascular diseases: a case study in Xixiangtang, China. BMC Public Health 24, 2368 (2024). https://doi.org/10.1186/s12889-024-19884-x


Received: 27 April 2024

Accepted: 26 August 2024

Published: 31 August 2024

DOI: https://doi.org/10.1186/s12889-024-19884-x


Keywords

  • Built environment
  • Impact mechanism



Consumer Financial Protection Bureau

Cash-back Fees

Executive Summary

Access to cash is a necessary component of a resilient financial system and dynamic economy. Many people rely on cash for day-to-day transactions due to its privacy and reliability, and cash accessibility is particularly critical in the case of a disruption or outage of digital payment systems. While people use various means of getting cash, one common method is to get "cash back" at a store when making a purchase with a debit or prepaid card. This option may be particularly important in banking deserts and in areas where banks and ATM operators charge significant fees. Retailers are essentially filling a void in access to cash, which banks and credit unions have historically supplied in an affordable way.

Providing cash back is valuable to consumers and merchants. Survey data show that it is a popular way for consumers to get money using their debit or prepaid cards. Merchants offer cash back to attract customers and reduce their cash handling costs. In its recent engagement and market monitoring, the CFPB observed that some retailers charge a fee for these transactions.

This spotlight provides an overview of consumers' use of cash back, the benefits and costs of such transactions to merchants, and the practices of other market actors that do not charge fees for this service. The CFPB also analyzed the cash-back fees of a sample of national retailers.

Fees for cash back may serve as a barrier and reduce people’s access to cash when they need it. The CFPB will continue to monitor developments related to the fees consumers pay for accessing cash, and the underlying failure of banks and credit unions to adequately supply cash throughout the country in an affordable manner.

Key Findings

  • Cash-back fees are costing consumers millions of dollars. The CFPB found that three companies in the sample charge cash-back fees and estimates that they collect over $90 million in fees annually for people to access their cash. The CFPB also estimates that the marginal cost to merchants for processing each transaction may be a few pennies, compared to the much higher fees charged by these retailers to consumers. While there may be other costs related to cash handling, these are generally reduced by the provision of cash back, as it reduces merchants' cash on hand.
  • Three major firms charge cash-back fees even though other competitors offer the service for free. Three retail companies (Dollar General, Dollar Tree, and Kroger), which also operate brands such as Family Dollar, Harris Teeter, Ralph's, and others, charge fees for this service, while the other national retail companies sampled by the CFPB do not. At the two largest dollar store corporations, cash-back fees for small withdrawal amounts are the highest in the sample ($1 or more for amounts under $50). Kroger, the country's largest grocery chain, recently expanded cash-back fees to its Harris Teeter brand (75 cents for $100 or less), higher than those in place among its other brands (50 cents for $100 or less), in addition to higher fees for larger amounts.
  • Cash-back fees are levied on low pre-set cash withdrawal amounts. Many merchants pre-determine the withdrawal amount options in a single transaction, commonly between $5 and $50. The fees charged on small, constrained amounts often constitute a high percentage of the cash withdrawal and limit consumers' ability to spread the cost of that fee over larger amounts. They may also induce repeat withdrawals, with consumers incurring a new fee each time.
  • Consumers with lower incomes or fewer banking choices may be more likely to encounter cash-back fees. Dollar stores are frequently located in small rural towns, communities of color, and low-income communities. These areas are also more likely to have fewer bank branches and to be communities where people are more reliant on cash for daily transactions.

Cash-back Transactions

This section summarizes the importance of cash availability and the use of cash back as an access point for consumers.

Cash is a critical part of a resilient payment ecosystem. Surveys show people still try to have cash on hand 1 and nearly 90 percent of people used cash in the last 30 days. 2 Cash accessibility is necessary should other types of digital payment systems experience failures, 3 such as in the event of a natural disaster or some other catastrophe, 4 or a technological malfunction at a single company. 5 Additionally, some populations are more reliant on cash than others for day-to-day transactions. For example, cash is more frequently used by people with lower incomes, racial minorities, and older Americans than other populations. 6 As discussed below, cash back is a common method for obtaining cash for many consumers.

How cash back works

Consumers may obtain cash during the completion of a purchase transaction at certain stores when using a PIN-authenticated debit card or prepaid card at the register. Some merchants also provide cash back at self-service registers. Consumers typically must choose from pre-set withdrawal amount options presented at the payment terminal at the time of the transaction. In a cash-back transaction, consumers are usually limited to a maximum withdrawal amount ranging from $5 to $50, though some merchants may allow higher amounts.

Scope of usage

CFPB analysis of data from the Diary and Survey of Consumer Payment Choice (Survey) found that from 2017 to 2022, cash withdrawals at retail locations made up 17 percent of all transactions by which people got cash from their checking account, savings account, or prepaid card. As shown in Figure 1, cash withdrawals at retail are second only to ATMs (61 percent) and more frequently used than bank tellers (14 percent). The Survey and methodology are discussed in the Table and Figure Notes section.

Figure 1: Instances of getting cash from bank account or prepaid card, by location, 2017 to 2022, combined

Pie chart showing ATM 61%, Retail point-of-sale 17%, Bank teller 14%, and Other 8%.

Source: CFPB tabulations of the Diary and Survey of Consumer Payment Choice.

The Survey data also show that from 2017 to 2022, cash withdrawals at a retail location (restricted to those where the source of funds was the consumer’s checking, savings, or a prepaid card) had a mean withdrawal amount of $34 (median: $20). 7 By contrast, during this same timeframe, the mean ATM withdrawal among survey participants was $126 (median: $100). 8 A study by researchers at the Federal Reserve Bank of Atlanta utilizing Survey data found that cash withdrawals at a retail store had the lowest average amount of cash withdrawal, and noted that “[t]he amount of cash received at a retail store is constrained by the store’s limits, so the amount of cash received in this way is not necessarily at the discretion of the consumer.” 9

Cash back may serve as a particularly important point of access in the absence of other banking services. A 2014 study by the Federal Reserve Bank of Richmond analyzed cash-back transactions from a national discount retail chain from 2010 to 2012. 10 Looking specifically at the Richmond bank’s district, the area with the highest frequency of cash-back transactions was in the southeastern region of South Carolina, an area “that has been subject to ‘persistent poverty’” and “has some of the sparsest dispersion of bank branches.” 11 The study also illustrated the lucrative nature of cash-back fees: During the course of this study period, the merchant introduced a fee for cash back. Data from this report indicates that the retailer collected approximately $21 million in cash-back fees in a year. 12

Benefits and Costs to Merchants

Merchants benefit from offering cash back at the point of sale. First, the service may attract potential shoppers, whether people making a purchase in order to get cash back or people who prefer one retail location over another so they can conveniently combine tasks. Second, it reduces merchants' cash handling costs. 13 Dispensing cash to consumers, such as through cash-back transactions, reduces merchants' supply of cash and therefore also reduces their cost of handling, transporting, and depositing excess cash.

Merchants incur costs for processing any type of payment transaction, including cash-back transactions. On any purchase using an electronic payment method, including a PIN-authorized debit card or prepaid card, a merchant will incur a range of fees for processing that payment, such as interchange, network, and processing fees. While the merchant incurs these fees for a consumer's purchase, there is an additional cost for providing cash back to the consumer.

To assess this additional transaction cost to the merchant for providing cash back, the CFPB modeled potential scenarios based on publicly available data and our market monitoring activities. The model incorporates estimates of merchant-incurred fees, such as interchange, network, processing, and fraud control fees. Methodology is discussed in detail in the Table and Figure Notes. The CFPB estimates that the additional marginal transactional cost to a merchant for processing a typical cash-back debit card transaction may range from a penny to about 20 cents (Table 1).

Table 1: Estimated additional merchant cost of a debit card cash-back transaction

| Example Retailer | Purchase Amount | Merchant Transaction Cost for Purchase Only | Additional Merchant Cost for $10 Cash Back | Additional Merchant Cost for $40 Cash Back |
| --- | --- | --- | --- | --- |
| National Discount Chain | $20 | $0.33 | $0.05 | $0.19 |
| National Grocery Store | $20 | $0.33 | $0.01 | $0.02 |

Source: CFPB calculations based on public data about industry practices and averages. See Table and Figure Notes below for methodology.

This section analyzes the cash-back fee practices of eight national retail chains, including how these practices vary among the national chains and other actors, such as local independent grocers. The analysis is supplemented by market monitoring discussions with merchants about fees, costs, and consumer trends, both merchants that charge cash-back fees and those that do not. The CFPB also conducted consumer experience interviews and reviewed consumer complaints submitted to the CFPB. The section concludes with a discussion of how these fees appear to function differently from fees for cash withdrawals at ATMs.

Current market practices

As of August 2024, there is no publicly available survey data regarding merchants’ cash-back practices or fees. To establish a baseline, the CFPB documented the fee practices of eight large retail companies. The sample consists of the two largest retail actors, measured by number of locations, across four different sectors: Dollar Stores, Grocery Stores, Drugstores, and Discount Retailers. 14 Using this approach, the eight retailers sampled are: Dollar General and Dollar Tree Inc. (Dollar Stores), Kroger Co. and Albertsons Companies (Grocery Stores), Walgreens and CVS (Drugstores), and Walmart and Target (Discount Retailers).

All retailers in our sample offer cash-back services, but only Dollar General, Dollar Tree Inc., and Kroger Co. brands charge a fee. The other retailers offer cash back for free, even for withdrawal amounts similar to or larger than those offered by the three retailers that charge (Table 2). Among the national chains that charge these cash-back fees, the CFPB estimates that they collect over $90 million in fees annually for people to access their cash. 15

Table 2: Cash-back fee practices, major retail companies

| Company | U.S. Stores | Fee for Cash Back | Maximum Withdrawal Amount (Per Transaction) |
| --- | --- | --- | --- |
| Dollar General | 20,022 | $1 to $2.50, depending on amount and other variables | $40 |
| Dollar Tree Inc. (Family Dollar and Dollar Tree) | 16,278 | Family Dollar: $1.50; Dollar Tree: $1 | $50 |
| Kroger Co. (incl. Kroger, Ralph's, Fred Meyer, Pick 'n Save, and other brands) | 2,722 | Harris Teeter brand: 75 cents for ≤$100, $3.00 for >$100; other brands: 50 cents for ≤$100, $3.50 for >$100 | Harris Teeter brand: $200; other brands: $300 |
| Albertsons Companies | 2,271 | No | $200 |
| Walmart | 5,214 | No | $100 |
| Target | 1,956 | No | $40 |
| Walgreens | 8,600 | No | $20 |
| CVS | 7,500 | No | $60 |

Source: CFPB analysis of the retail cash-back market. See Table and Figure Notes for methodology.

Beyond these national chains, other providers offer cash back as a free service to their customers. Through its market monitoring activities, the CFPB observed that many local independent grocers offer the service but do not charge a fee, even though they are likely to have thinner profit margins and less bargaining power than national chains when negotiating the costs they incur from wholesalers or the fees charged by payment processors. The U.S. Postal Service also offers cash back on debit transactions, in increments of $10 up to a $50 maximum, free of charge. 16

Cash-back fees at dollar stores

Among the merchants sampled, Dollar General and Dollar Tree Inc. charge the highest fees for withdrawal amounts under $50. These fees combined with the constrained withdrawal amount may mean that the fee takes up a hefty percentage relative to the amount of cash withdrawn, and people may be less able to limit the impact of the fee by taking out more cash.

Additionally, the geographic distribution of dollar store chains and their primary consumer base raises concerns that these fees may be borne by economically vulnerable populations and those with limited banking access. Dollar stores are prevalent in rural communities, low-income communities, and communities of color – the same communities who may also face challenges in accessing banking services. 17 For example, Dollar General noted that in 2023 “approximately 80% of [its] stores are located in towns of 20,000 or fewer people,” 18 while Dollar Tree Inc. operated at least 810 dual-brand combination stores (Family Dollar and Dollar Tree in a single building) designed specifically “for small towns and rural communities…with populations of 3,000 to 4,000 residents.” 19

Though they are open to and serve consumers of all income levels, dollar stores report that they locate stores specifically to serve their core customers: lower-income consumers. 20 In urban communities, one study shows, “proximity to dollar stores is highly associated with neighborhoods of color even when controlling for other factors.” 21 These same communities may also face challenges in accessing banking services. Low-income communities and communities of color often face barriers to access to banking services, and rural communities are 10 times more likely to meet the definition of a banking desert than urban areas. 22

Though the dollar store concept existed as far back as the 1950s, it has experienced significant expansion and consolidation since the 2000s. 23 Dollar Tree Inc. acquired Family Dollar in 2015. 24 From 2018 to 2021, nearly half of all retail locations opened in the U.S. were dollar stores. 25 In research examining the impact of dollar store expansion, studies indicate that the opening of a dollar store is associated with the closure of nearby local grocery retailers. 26

Variation of fees charged

In its scan of current market practices, the CFPB found variations in fees among store locations and brands owned by the same company. For example, as reflected in Table 2, Dollar Tree charges consumers $1 for cash back at Dollar Tree branded stores, but $1.50 at its Family Dollar stores. Similarly, Kroger Co. has two different fee tiers for its brands. In 2019, Kroger Co. rolled out a $0.50 cash-back fee for amounts of $100 or less, and $3.50 for amounts between $100 and $300. This took effect at brands such as Kroger, Fred Meyer, Ralph's, QFC, Pick 'N Save, and others. At the time of the rollout, the company noted two exceptions: electronic benefits transfer (EBT) card users would not be charged a fee, and customers using their Kroger Plus card would not be charged for amounts under $100 but would be charged $0.50 for larger amounts. Kroger Co. acquired the southern grocery chain Harris Teeter in 2014 but did not begin charging a cash-back fee at those stores until January 2024, at $0.75 for amounts of $100 or less and $3 for larger amounts. 27

In its engagement with stakeholders, the CFPB learned that Dollar General’s fees appeared to vary in different locations. To better understand this potential variation, in December 2022, the CFPB mystery shopped at nine locations in one state, across a mix of rural, suburban, and urban communities. The CFPB acknowledges this is a small sample and is not intended to be representative. The data collected is based on the knowledge of the store associates at the time of each interaction.

The CFPB found a range of fee variations across store locations: five of the nine respondents noted that the fee varies depending on the type of card used for the transaction. When probed about the meaning of "type of card," most noted that it depends on the customer's bank, though it is not clear exactly which fees will be triggered by which card type before a transaction is initiated. Reported fees ranged from $1 to $2.50, with some stores reporting a flat fee of $1.50 and others reporting a range that tiered up with larger withdrawal amounts (withdrawals were capped at $40). Most stores in this sample reported fees between $1.00 and $1.50, although two stores located in small, completely rural counties reported a higher range. The store located in the smallest and most isolated county within the sample, with only about 3,600 people, reported the highest fee amount of $2.50.

Distinction from ATM fees

One of the market dynamics likely contributing to retailers’ ability to charge these fees is the high fees also charged to consumers for using out-of-network automated teller machines (ATMs). One source estimates that the average out-of-network ATM fee is $4.77, accounting for both the surcharge fee charged by the ATM owner and the foreign fee charged by the consumer’s financial institution. 28 By comparison, a $2 fee for cash back at a retailer may appear cheaper, and usually does not trigger an additional fee by the consumers’ financial institution or prepaid card issuer. Notwithstanding the high ATM fees, there are reasons for focused attention on the consumer risk of cash-back fees charged by retailers, primarily the amount of the fee relative to the value of the cash withdrawal and the distribution of the fee burden across income groups.

In a typical ATM transaction, a consumer has a greater ability to distribute the cost of the fee across a larger amount of cash than with cash back. There may be exceptions for consumers who have only $10 or $20 in their bank account, but as shown in Table 3, low-income consumers and others withdraw greater amounts at ATMs than via cash back, on average. In cash-back transactions, lower withdrawal limits are in place, and consumers do not have the option to withdraw larger amounts. CFPB analysis of the Diary and Survey of Consumer Payment Choice from 2017 to 2022 shows that even among consumers with incomes below $50,000, the amount withdrawn at an ATM is more than double the typical cash-back withdrawal amount. Across all income groups, both average and median ATM withdrawal amounts are larger than the corresponding cash-back withdrawal amounts (Table 3).

Table 3: Average ATM and cash-back withdrawal amounts, by income, 2017 to 2022 combined

| Income | Average ATM Withdrawal | Average Cash-back Withdrawal | Median ATM Withdrawal | Median Cash-back Withdrawal |
| --- | --- | --- | --- | --- |
| Less than $25,000 | $144 | $45 | $65 | $20 |
| $25,000 to $49,999 | $113 | $35 | $60 | $25 |
| $50,000 to $74,999 | $113 | $29 | $84 | $20 |
| $75,000 to $99,999 | $114 | $45 | $100 | $26 |
| $100,000 or more | $146 | $33 | $100 | $20 |

Source: CFPB tabulations of the Diary and Survey of Consumer Payment Choice. See Table and Figure Notes for methodology.

Further, while merchants limit the amount of a single withdrawal, there is no limit on the number of withdrawals. So, if a consumer needs $100 cash at a store which limits a single withdrawal to a maximum amount of $50 with a $2 fee, the consumer would have to make two $50 withdrawals for a $4 fee plus the cost of any otherwise unwanted purchase required to access the cash-back service.
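A small worked example of this point, using the hypothetical figures from the paragraph above (a $50 per-withdrawal cap and a $2 fee per withdrawal):

```python
import math

def total_cash_back_fees(needed: float, cap: float = 50.0, fee: float = 2.0) -> float:
    """Total fees to withdraw `needed` dollars when each cash-back
    withdrawal is capped at `cap` and each one incurs `fee`."""
    withdrawals = math.ceil(needed / cap)  # every withdrawal triggers a new fee
    return withdrawals * fee

needed = 100.0
fees = total_cash_back_fees(needed)  # two $50 withdrawals -> $4.00 in fees
print(f"${fees:.2f} in fees ({fees / needed:.0%} of the cash withdrawn)")
```

This does not count the cost of the purchases the consumer must make to access the service, which the paragraph above notes would come on top of the fees.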

Finally, the burden of cash-back fees may be distributed differently than the burden of ATM fees. The share of consumers who pay ATM fees for cash withdrawals is relatively evenly distributed across income levels, according to a study based on the Diary and Survey of Consumer Payment Choice. 29 The study found little variation by income quintile in the percentage of consumers who encountered a fee for an ATM cash withdrawal, though it did not examine the amount of the ATM fees paid. Analogous data are not available for cash-back fees, but a similarly even distribution across incomes is unlikely given the demographics of the consumer base served by the largest retailers that charge such fees (dollar stores).

While the use of digital payment methods is on the rise, cash accessibility remains a critical component of a resilient financial infrastructure and dynamic economy. Bank mergers, branch closures, and bank fee creep have reduced the supply of free cash access points for consumers. In this void, people may be more reliant on retailers for certain financial services historically provided by banks and credit unions, such as cash access. In this context, we observe that some retailers provide cash back as a helpful service to their customers, while other retailers may be exploiting these conditions by charging fees to their consumers for accessing their cash.

This spotlight examines the presence of retailer cash-back fees and their impact on consumers. Cash-back fees are being levied by just a small handful of large retail conglomerates (Dollar General, Dollar Tree Inc., and Kroger Co.) amidst a backdrop of consolidation in these segments. Meanwhile, other large retailers continue to offer cash-back services for free. The CFPB estimates cash-back fees cost consumers about $90 million a year.

The CFPB is concerned that reduced access to cash undermines the resilience of the financial system and deprives consumers of a free, reliable, and private means of engaging in day-to-day transactions. The CFPB will continue to monitor developments related to the fees consumers pay for accessing cash, and work with agencies across the federal government to ensure people have fair and meaningful access to the money that underpins our economy.

Table and Figure Notes

Notes for Figure 1

The Federal Reserve Bank of Atlanta's annual Diary and Survey of Consumer Payment Choice (Survey) tracks consumers' self-reported payment habits over a three-day period in October using a nationally representative sample. The survey includes questions about whether and how consumers access cash, such as where they made the withdrawal, the source of the cash, and the amount of the withdrawal. Figure 1 shows the percentage of all cash withdrawal transactions from a checking account, savings account, or prepaid card reported between 2017 and 2022, by location (ATM, retail point-of-sale, bank teller, and other). The number of observations during this period is 192 transactions. The figure does not include cash-back transactions made using a credit card cash advance feature or other form of credit.

Notes for Table 1

This model assumes that 80 percent of the merchant transaction cost is due to interchange fees, 15 percent due to network fees, and 5 percent due to payment acquirer fees. It also includes a $0.01 fee for fraud protection. For regulated transactions, the interchange fees are $0.22 + 0.05% of the transaction amount. Regulated transactions are those where the debit card used is issued by a bank with more than $10 billion in assets, and subject to 15 U.S.C. § 1693o-2. Exempt transactions are those not subject to this statutory cap on interchange fees. While Mastercard does not publish its fees for exempt transactions, Visa does. This model uses Visa’s published fees as of October 2023 for card-present transactions: for the National Discount Chain, the fees for Exempt Retail Debit ($0.15 + 0.80%), and for the National Grocery Chain, Exempt Supermarket Debit ($0.30 flat fee). An October 2023 Federal Reserve report on interchange fee revenue found that in 2021, the most recent data available, 56.21 percent of debit transactions were regulated and 43.79 percent were exempt. This composition is reflected in the table.
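To make the arithmetic concrete, the following is a minimal Python sketch of the cost model described in this note. It is our illustrative reconstruction, not the CFPB's published code; the fee parameters are the ones stated above, and small rounding differences from Table 1 are expected.

```python
# Assumptions mirrored from the note above: interchange = 80% of merchant
# transaction cost, network fees 15%, acquirer fees 5% (so total cost =
# interchange / 0.80), plus a $0.01 fraud-control fee; 56.21% of debit
# transactions regulated, 43.79% exempt.

def interchange(amount: float, regulated: bool, supermarket: bool) -> float:
    """Interchange fee for one card-present debit transaction."""
    if regulated:                      # issuer over $10B in assets (capped rate)
        return 0.22 + 0.0005 * amount  # $0.22 + 0.05% of the amount
    if supermarket:
        return 0.30                    # Visa Exempt Supermarket Debit: flat $0.30
    return 0.15 + 0.008 * amount       # Visa Exempt Retail Debit: $0.15 + 0.80%

def merchant_cost(amount: float, supermarket: bool) -> float:
    """Blended per-transaction merchant cost under the assumptions above."""
    blended = (0.5621 * interchange(amount, True, supermarket)
               + 0.4379 * interchange(amount, False, supermarket))
    return blended / 0.80 + 0.01       # gross up for network/acquirer fees; add fraud fee

for label, supermarket in (("National Discount Chain", False),
                           ("National Grocery Store", True)):
    base = merchant_cost(20.0, supermarket)            # $20 purchase only
    for cash_back in (10.0, 40.0):
        extra = merchant_cost(20.0 + cash_back, supermarket) - base
        print(f"{label}: +${extra:.2f} for ${cash_back:.0f} cash back")
```

Running the loop approximately reproduces the incremental costs in Table 1, including the roughly $0.05 and $0.19 figures for the discount chain scenario.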

Notes for Table 2

The storefront counts for each retailer come from their websites, last visited on March 28, 2024, or their most recent reports to investors. Fee information was gathered through publicly available information, such as the merchant's website, and/or verified through the CFPB's market monitoring activities.

Dollar Tree Inc. announced on March 13, 2024 that it will close 1,000 of its Family Dollar and Dollar Tree brand stores over the course of the year. If those closures occur, Dollar Tree Inc. will still have over 15,000 storefronts across the country.

In October 2022, Kroger Co. and Albertsons Companies announced their proposal to merge, though on February 26, 2024, the Federal Trade Commission and nine state attorneys general sued to block this proposal, alleging that the deal is anti-competitive. On April 22, 2024, Kroger Co. and Albertsons Companies announced a revised plan in which, if the merger is approved, the combined entity would divest 579 stores to C&S Wholesalers. If the divestiture occurs, the combined entity will still have over 4,400 stores across the country.

Notes for Table 3

See the notes for Figure 1 above regarding the Diary and Survey of Consumer Payment Choice (Survey). Table 3 provides mean and median amounts of ATM and retail point-of-sale cash withdrawal transactions by income. In the Survey, participants were asked to report the total combined income of all family members over age 15 living in the household during the past 12 months. From these responses, we constructed five income brackets for each respondent in each year: four brackets of $25,000 each, plus a fifth for respondents reporting more than $100,000 in annual household income.
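As an illustration, the bracket construction described above can be reproduced with a few lines of pandas; this is a hypothetical sketch (the `income` values are made up, and the Survey's actual processing may differ).

```python
import pandas as pd

df = pd.DataFrame({"income": [18_000, 42_000, 67_000, 88_000, 150_000]})  # hypothetical
bins = [0, 25_000, 50_000, 75_000, 100_000, float("inf")]
labels = ["Less than $25,000", "$25,000 to $49,999", "$50,000 to $74,999",
          "$75,000 to $99,999", "$100,000 or more"]
# right=False makes each bracket half-open, e.g. [25,000, 50,000)
df["bracket"] = pd.cut(df["income"], bins=bins, labels=labels, right=False)
print(df)
```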

See e.g., Jay Lindsay, A Fatal Cash Crash? Conditions Were Ripe for It After the Pandemic Hit, but It Didn't Happen, Fed. Rsrv. Bank of Boston (Nov. 2, 2023), https://www.bostonfed.org/news-and-events/news/2023/11/cash-crash-pandemic-increasing-credit-card-use-diary-of-consumer-payment-choice.aspx

Kevin Foster, Claire Greene, & Joanna Stavins, The 2023 Survey and Diary of Consumer Payment Choice, Fed. Rsrv. Bank of Atlanta (June 2024), https://doi.org/10.29338/rdr2024-01

See e.g., Hilary Allen, Payments Failure, Boston College Law Review, Forthcoming, American University, WCL Research Paper No. 2021-11 (Feb. 21, 2020), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3539797

See e.g., Scarlett Heinbuch, Cash Is Critical in Times of Crisis, Fed. Rsrv. Bank of Atlanta (Mar. 7, 2022), https://www.atlantafed.org/blogs/take-on-payments/2022/03/07/cash-in-crisis

See e.g., Carly Page, Square Says It Has Resolved Daylong Outage, TechCrunch (Sept. 8, 2023), https://techcrunch.com/2023/09/08/square-day-long-outage-resolved/. See also Caroline Haskins, The Global CrowdStrike Outage Triggered a Surprise Return to Cash, Wired (July 19, 2024), https://www.wired.com/story/microsoft-crowdstrike-outage-cash/.

See Berhan Bayeh, Emily Cubides and Shaun O'Brien, 2024 Findings from the Diary of Consumer Payment Choice, Fed. Rsrv. (May 13, 2024), https://www.frbservices.org/binaries/content/assets/crsocms/news/research/2024-diary-of-consumer-payment-choice.pdf (findings related to low-income consumers' and older Americans' use of cash); Emily Cubides and Shaun O'Brien, 2023 Findings from the Diary of Consumer Payment Choice, Fed. Rsrv. (May 19, 2024), https://www.frbsf.org/cash/wp-content/uploads/sites/7/2023-Findings-from-the-Diary-of-Consumer-Payment-Choice.pdf (findings related to unbanked households' use of cash); and Michelle Faviero, More Americans Are Joining the 'Cashless' Economy, Pew Rsch. Ctr. (Oct. 5, 2022), https://www.pewresearch.org/short-reads/2022/10/05/more-americans-are-joining-the-cashless-economy/ (findings related to use of cash by race and other demographics).

Similarly, the average cash-back withdrawal amount was $33 in 2012, the most recent data available from the Federal Reserve Payments Study. The study was based on self-reported information from financial institutions surveyed by the Federal Reserve. Of the reported transactions, 73 percent were on debit cards with an average withdrawal amount of $33, and 27 percent were on general purpose prepaid cards with an average withdrawal amount of $19. 2013 Federal Reserve Payments Study: Recent and Long-Term Payment Trends in the United States: 2003–2012, Fed. Rsrv. Bd. (July 2014), https://www.frbservices.org/binaries/content/assets/crsocms/news/research/2013-fed-res-paymt-study-summary-rpt.pdf

The amounts in the Survey are lower than the average ATM withdrawal amounts reported in the 2022 Federal Reserve Payments Study, which utilizes data from surveying financial institutions. Per this study, in 2021, the average ATM withdrawal was $198. The Federal Reserve Payments Study: 2022 Triennial Initial Data Release, Fed. Rsrv. Bd. (Apr. 21, 2023), https://www.federalreserve.gov/paymentsystems/fr-payments-study.htm

Claire Greene and Oz Shy, How Consumers Get Cash: Evidence from a Diary Survey, Fed. Rsrv. Bank of Atlanta (Apr. 2019), at 5, https://www.atlantafed.org/-/media/documents/banking/consumer-payments/research-data-reports/2019/05/08/how-consumers-get-cash-evidence-from-a-diary-survey/rdr1901.pdf (finding, "For the largest amounts of cash, respondents mostly turned to employers, with an average dollar value of cash received of $227. At bank tellers and ATMs, consumers also received average dollar values greater than the overall average: $159 and $137, respectively. Consumers received smaller amounts from family or friends ($93) and, notably, cash back at a retail store ($34). All these dollar amounts are weighted. The amount of cash received at a retail store is constrained by the store's limits, so the amount of cash received in this way is not necessarily at the discretion of the consumer.")

Neil Mitchell and Ann Ramage, The Second Participant in the Consumer to Business Payments Study, Fed. Rsrv. Bank of Richmond (Sept. 15, 2014), https://www.richmondfed.org/~/media/richmondfedorg/banking/payments_services/understanding_payments/pdf/psg_ck_20141118.pdf

Id. at 8, Figures 7 and 8.

See e.g., Stan Sienkiewicz, The Evolution of EFT Networks from ATMs to New On-Line Debit Payment Products, Discussion Paper, Payment Cards Ctr. of the Fed. Rsrv. Bank of Philadelphia (Apr. 2002), https://www.philadelphiafed.org/-/media/frbp/assets/consumer-finance/discussion-papers/eftnetworks_042002.pdf?la=en&hash=88302801FC98A898AB167AC2F9131CE1 ("The cash back option became popular with supermarket retailers, since store owners recognized savings as a result of less cash to count at the end of the day, a chore that represented a carrying cost to the establishment.").

These market segments and retailers were selected for the market analysis using criteria similar to those in academic literature on dollar store locations in the context of food access or impacts on other market dynamics, such as local grocers. See e.g., El Hadi Caoui, Brett Hollenbeck, and Matthew Osbourne, The Impact of Dollar Store Expansion on Local Market Structure and Food Access (June 22, 2022), available at https://ssrn.com/abstract=4163102 (finding "In 2021, there were more of these stores operating than all the Walmarts, CVS, Walgreens, and Targets combined by a large margin.") and Yue Cao, The Welfare Impact of Dollar Stores, available at https://yuecao.dev/assets/pdf/YueCaoDollarStore.pdf (last visited Aug. 23, 2024) (using the categories of dollar stores, groceries, and mass merchandise (such as Walmart) for comparisons across retail segments and noting that dollar stores regard these other segments as competitors).

Estimate based on information voluntarily provided in the CFPB's market monitoring activities.

What Forms of Payment are Accepted? U.S. Postal Serv., https://faq.usps.com/s/article/What-Forms-of-Payment-are-Accepted (last visited Aug. 23, 2024).

See generally, Stacy Mitchell, Kennedy Smith, and Susan Holmberg, The Dollar Store Invasion, Inst. for Local Self Reliance (Mar. 2023), https://cdn.ilsr.org/wp-content/uploads/2023/01/ILSR-Report-The-Dollar-Store-Invasion-2023.pdf. There is also extensive research on dollar store locations in other contexts, such as food access and impact on consumer spending habits. El Hadi Caoui, Brett Hollenbeck, and Matthew Osbourne, The Impact of Dollar Store Expansion on Local Market Structure and Food Access, at 5 (June 22, 2022), available at https://ssrn.com/abstract=4163102

Dollar General Annual Report (Form 10-K) at 7 (Mar. 25, 2024), https://investor.dollargeneral.com/websites/dollargeneral/English/310010/us-sec-filing.html?format=convpdf&secFilingId=003b8c70-dfa4-4f21-bfe7-40e6d8b26f63&shortDesc=Annual%20Report.

Dollar Tree, Inc. Annual Report (Form 10-K) at 7 (Mar. 20, 2024), https://corporate.dollartree.com/investors/sec-filings/content/0000935703-23-000016/0000935703-23-000016.pdf

See e.g., Dollar General Annual Report (Form 10-K) at 7 (Mar. 25, 2024) ("We generally locate our stores and plan our merchandise selections to best serve the needs of our core customers, the low and fixed income households often underserved by other retailers, and we are focused on helping them make the most of their spending dollar.") and Dollar Tree, Inc. Annual Report (Form 10-K) at 6 (Mar. 20, 2024) ("Family Dollar primarily serves a lower than average income customer in urban and rural locations, offering great values on everyday items.")

Dr. Jerry Shannon, Dollar Stores, Retailer Redlining, and the Metropolitan Geographies of Precarious Consumption, Ann. of the Am. Assoc. of Geographers, Vol. 111, No. 4, 1200-1218 (2021) (analyzing over 29,000 storefront locations of Dollar General, Dollar Tree, and Family Dollar across the three largest MSAs in each of the nine U.S. Census Bureau-defined divisions).

Kristen Broady, Mac McComas, and Amine Ouazad, An Analysis of Financial Institutions in Black-Majority Communities: Black Borrowers and Depositors Face Considerable Challenges in Accessing Banking Services, Brookings Inst. (Nov. 2, 2021), https://www.brookings.edu/articles/an-analysis-of-financial-institutions-in-black-majority-communities-black-borrowers-and-depositors-face-considerable-challenges-in-accessing-banking-services/ and Drew Dahl and Michelle Franke, Banking Deserts Become a Concern as Branches Dry Up, Fed. Rsrv. Bank of St. Louis (July 25, 2017), https://www.stlouisfed.org/publications/regional-economist/second-quarter-2017/banking-deserts-become-a-concern-as-branches-dry-up

El Hadi Caoui, Brett Hollenbeck, and Matthew Osbourne, The Impact of Dollar Store Expansion on Local Market Structure and Food Access (June 22, 2022), available at https://ssrn.com/abstract=4163102.

Dollar Tree Completes Acquisition of Family Dollar, Dollar Tree Inc. (July 6, 2015), available at https://corporate.dollartree.com/news-media/press-releases/detail/120/dollar-tree-completes-acquisition-of-family-dollar

El Hadi Caoui, Brett Hollenbeck, and Matthew Osbourne, The Impact of Dollar Store Expansion on Local Market Structure and Food Access (June 22, 2022), available at https://ssrn.com/abstract=4163102 and Yue Cao, The Welfare Impact of Dollar Stores, https://yuecao.dev/assets/pdf/YueCaoDollarStore.pdf (last visited Aug. 23, 2024).

Evan Moore, Harris Teeter Introduces New Fees that Have Customers Upset. What To Know Before You're Charged, Charlotte Observer (Mar. 14, 2024), https://amp.charlotteobserver.com/news/business/article286627340.html

Karen Bennett and Matthew Goldberg, Survey: ATM Fees Reach 26-year High While Overdraft Fees Inch Back Up, Bankrate.com (Aug. 21, 2024), https://www.bankrate.com/banking/checking/checking-account-survey/

Oz Shy and Joanna Stavins, Who Is Paying All These Fees? An Empirical Analysis of Bank Account and Credit Card Fees, Fed. Rsrv. Bank of Boston, Working Paper No. 22-18, at Table 2 (Aug. 2022), https://www.bostonfed.org/publications/research-department-working-paper/2022/who-is-paying-all-these-fees-an-empirical-analysis-of-bank-account-and-credit-card-fees.

Enhancing geotechnical zoning through near-surface geophysical surveys: a case study from eastern Agadir, Morocco

  • Original Paper
  • Published: 30 August 2024

Cite this article


  • Ismaail Khadrouf   ORCID: orcid.org/0000-0002-9164-9846 1 ,
  • Ouafa El Hammoumi 1 ,
  • Najib El Goumi 2 ,
  • Abdessamad El Atillah 3 ,
  • Youssef Raddi 4 &
  • Mostafa Oukassou 1  

Eastern Agadir (Morocco) has been selected for urban expansion. However, the area faces challenges owing to its location within an alluvial basin of weak and heterogeneous sediments, compounded by a scarcity of geotechnical data. This study aimed to create the first geotechnical zoning map of the area to support informed urban planning. Geophysical surveys were combined with available in situ investigations to address this data gap and to delineate and characterize the main geotechnical zones. The electrical resistivity tomography (ERT) method was used to map the soil distribution horizontally and vertically, complemented by laboratory tests. The multichannel analysis of surface waves (MASW) and seismic refraction tomography (SRT) methods provided insights into important geotechnical and elastic-dynamic parameters. This analysis revealed three distinct geoseismic layers. The surface layer consists of sand, silt, pebbles, weathered limestone, and marlstone, whereas the underlying layer contains compacted silt, dense sand, conglomerate, sandstone, limestone, and marlstone; this second layer exhibits higher seismic velocities and lower soil heterogeneity than the surface layer. The third layer, characterized by limestone, marlstone, and compacted deposits, serves as the geotechnical bedrock. VS30 velocities were calculated and classified according to the EUROCODE 8 scheme, which categorizes sites based on their geological characteristics and associated seismic risks. The study area was divided into Class A (rock), Class B (dense soil and soft rock), and Class C (medium dense sand and gravel); the majority of sites were categorized as Class B. This classification is essential for assessing seismic response and designing earthquake-resistant structures. The final zoning map reveals five distinct geotechnical zones: Tagragra's Dome, the alluvial fans and floodplain, the alluvial terrace, the limestone plateau, and the sand dune zone. The calculated parameters revealed soil heterogeneity in both horizontal and vertical directions. These results provide valuable key parameters for informed urban planning, with special attention to areas of weak soil during foundation design.
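As an illustration of the VS30 calculation and the EUROCODE 8 site classes mentioned in the abstract, the following sketch (not from the paper) computes a time-averaged VS30 from a hypothetical layer profile and maps it to a ground type; the thresholds follow the standard's ground-type table.

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear-wave velocity over the top 30 m:
    Vs30 = 30 / sum(h_i / v_i), where the layer thicknesses h_i sum to 30 m."""
    assert abs(sum(thicknesses_m) - 30.0) < 1e-6, "layers must total 30 m"
    return 30.0 / sum(h / v for h, v in zip(thicknesses_m, velocities_mps))

def eurocode8_ground_type(v: float) -> str:
    """Map a Vs30 value (m/s) to a EUROCODE 8 ground type."""
    if v > 800:
        return "A (rock)"
    if v >= 360:
        return "B (very dense soil or soft rock)"
    if v >= 180:
        return "C (medium dense sand, gravel, or stiff clay)"
    return "D (loose-to-medium cohesionless soil or soft clay)"

# Hypothetical three-layer profile: 8 m at 300 m/s, 12 m at 550 m/s, 10 m at 900 m/s.
v = vs30([8, 12, 10], [300, 550, 900])
print(f"Vs30 = {v:.0f} m/s -> ground type {eurocode8_ground_type(v)}")  # ~503 m/s -> B
```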





Khadrouf, I., El Hammoumi, O., El Goumi, N. et al. Enhancing geotechnical zoning through near-surface geophysical surveys: a case study from eastern Agadir, Morocco. Med. Geosc. Rev. (2024). https://doi.org/10.1007/s42990-024-00137-3


  • Eastern Agadir
  • Near-surface geophysical surveys
  • Geotechnical zoning map


Exploring the Nexus of Financial Incentives and Employee Motivation in the Financial Sector: A Study of Pakistan

  • Dr. Ahsan-ul Haque Shaikh

The main objective of this study is to identify the financial incentive factors behind employee motivation in financial institutions in Pakistan. Salary, housing allowance, and medical insurance were the independent variables, and employee motivation was the dependent variable. The sample comprised 300 respondents drawn from 190 financial institutions in Pakistan and was selected through random sampling. Primary data were collected through questionnaires administered via Google Forms. Correlation analysis suggests that all variables are strongly and positively correlated (r > 0.70), and regression analysis was used to estimate the effect of financial incentives on employee motivation. The findings indicate that salary, housing allowance, and medical insurance each have a statistically significant positive effect (p < 0.05) on employee motivation. The research supports the view that organizations should invest in financial rewards to strengthen employee motivation and, in turn, organizational performance.

      Keywords: financial incentives, financial sector, employee motivation.
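The workflow this abstract reports, Pearson correlations followed by multiple regression, is conventional enough to sketch. The following minimal illustration runs it on synthetic Likert-style data with pandas and statsmodels; the column names and data-generating coefficients are hypothetical assumptions, not the study's data.

```python
# Minimal sketch of a correlation-then-regression analysis on synthetic
# survey data. All numbers below are invented; only the pipeline mirrors
# the abstract (correlations, then OLS with three incentive predictors).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300  # matches the study's sample size

# Synthetic composite scores for the three financial incentives
salary = rng.normal(3.5, 0.8, n)
housing = 0.6 * salary + rng.normal(1.2, 0.6, n)
medical = 0.5 * salary + rng.normal(1.5, 0.6, n)
motivation = (0.4 * salary + 0.3 * housing + 0.2 * medical
              + rng.normal(0.5, 0.5, n))

df = pd.DataFrame({"salary": salary, "housing_allowance": housing,
                   "medical_insurance": medical, "motivation": motivation})

# Step 1: Pearson correlation matrix (the abstract reports r > 0.70)
print(df.corr().round(2))

# Step 2: OLS regression of motivation on the three incentives
X = sm.add_constant(df[["salary", "housing_allowance", "medical_insurance"]])
model = sm.OLS(df["motivation"], X).fit()
print(model.summary())
```

In the study's terms, a predictor whose coefficient shows p < 0.05 in the summary output would be read as a statistically significant positive effect of that incentive on employee motivation.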


COMMENTS

  1. 10 Financial Analytics Case Studies [2024]

    10 Financial Analytics Case Studies. 1. Risk Management in Banking Sector: JPMorgan Chase & Co. JPMorgan Chase & Co. has harnessed the power of big data analytics and machine learning to revolutionize its approach to risk management. The bank's use of advanced algorithms enables the analysis of vast datasets, identifying subtle patterns of ...

  2. Financial analysis

    Finance & Accounting Case Study. ... Provides a framework that helps explain these real-world observations about accounting and financial statement analysis. ... Here's a way to crunch the data ...

  3. Financial Statements Examples

    The first of our financial statements examples is the cash flow statement, which shows the changes in a company's cash position during a fiscal period. It takes the net income figure from the income statement and adjusts it for non-cash expenses to find the change in cash from the beginning ... (this indirect-method calculation is worked through in the sketch after this list).

  4. 10 Real World Data Science Case Studies Projects with Example

    A case study in data science is an in-depth analysis of a real-world problem using data-driven approaches. It involves collecting, cleaning, and analyzing data to extract insights and solve challenges, offering practical insights into how data science techniques can address complex issues across various industries.

  5. Examples of Financial Analysis

    Table of contents: Financial Analysis Examples; Top 4 Financial Statement Analysis Examples; Example #1 - Liquidity Ratios (Current Ratio, Quick Ratio); Example #2 - Profitability Ratios (Operating Profitability Ratio, Net Profit Ratio). Several of these ratios are computed in the sketch after this list.

  6. Financial Case Study Analysis

    When evaluating potential risks in the financial case study, consider the following: Risk Assessment: Begin by conducting a thorough risk assessment to identify all potential threats to the financial analysis process. This includes market risks, regulatory risks, and operational risks that could impact the outcomes.

  7. 6 Data Analytics Use Cases in Banking and Financial Services

    Advanced Analytics in BFSI - Benefits. Updating the data analytics use cases in banking and financial services with the evolving data science methodologies can help organizations sustain stronger customer relationships. Let us look at a few more benefits of advanced analytics. Customer 360-degree insights - By leveraging advanced analytics ...

  8. Amazon Case Study I Financial Modeling Course I CFI

    Amazon (AMZN) Case Study. This course is built on a case study of Amazon, where students are tasked with building a financial model and performing comparable company analysis to value AMZN shares and make an investment recommendation. Over the course of the case, students will learn how to build a detailed financial forecast of Amazon

  9. PDF A Handbook of Case Studies in Finance

    Tarika Sikarwar. A Handbook of Case Studies in Finance. By Tarika Sikarwar. This book first published 2017. Cambridge Scholars Publishing. Lady Stephenson Library, Newcastle upon Tyne, NE6 2PA, UK. British Library Cataloguing in Publication Data. A catalogue record for this book is available from the British Library.

  10. Case Studies

    This chapter presents four case studies which provide examples of financial information upon which liquidity, leverage, profitability, and causal calculations may be performed. The first two case studies also contain example ratio summary and analysis. Two discussion cases are also provided, followed by questions related to the financial ...

  11. Case Study Method: A Step-by-Step Guide for Business Researchers

    A case study protocol is a formal document capturing the entire set of procedures involved in the collection of empirical material. It gives researchers direction for gathering evidence, analyzing empirical material, and reporting the case study. This section includes a step-by-step guide that is used for the execution of the actual study.

  12. Introduction to Financial Statement Analysis

    Introduction. Financial analysis is the process of examining a company's performance in the context of its industry and economic environment in order to arrive at a decision or recommendation. Often, the decisions and recommendations addressed by financial analysts pertain to providing capital to companies—specifically, whether to invest in ...

  13. Data Science in Finance: The Top 9 Use Cases

    Blockchain and cryptocurrency, mobile payment platforms, analytics-driven trading apps, lending software, and AI-based insurance products are just a few examples of fintech that is driven by data science. 9. General data management. As mentioned, financial institutions have access to huge amounts of data.

  14. Financial Analysis: Definition, Importance, Types, and Examples

    Financial analysis is the process of evaluating businesses, projects, budgets, and other finance-related transactions to determine their performance and suitability. Typically, financial analysis ...

  15. Exploring Ratio Analysis Through Real-Life Case Studies

    Ratio analysis case studies provide actionable insights and practical applications for businesses and investors. Learning from these real-life examples empowers stakeholders to make informed decisions based on a thorough understanding of financial ratios. Introduction: Ratio analysis is a powerful tool in financial analysis, providing insights ...

  16. Financial Markets: Articles, Research, & Case Studies on Financial

    by Carolin E. Pflueger, Emil Siriwardane, and Adi Sunderam. This paper sheds new light on connections between financial markets and the macroeconomy. It shows that investors' appetite for risk—revealed by common movements in the pricing of volatile securities—helps determine economic outcomes and real interest rates.

  17. (PDF) Data Analysis in Finance Management

    Abstract. Data analysis has become a cornerstone in the realm of finance management, transforming the way financial decisions are made and strategies are formulated. In an era where information is ...

  18. Case Study Methods and Examples

    The purpose of case study research is twofold: (1) to provide descriptive information and (2) to suggest theoretical relevance. Rich description enables an in-depth or sharpened understanding of the case. It is unique given one characteristic: case studies draw from more than one data source. Case studies are inherently multimodal or mixed ...

  19. Financial Statement Analysis

    There are 4 modules in this course. In the final course of this certificate, you will apply your skills towards financial statement analysis. If you have the foundational concepts of accounting under your belt, you are ready to put them into action in this course. Here, you will learn how to reconcile different types of accounts, check for ...

  20. Google Data Analytics Capstone: Complete a Case Study

    There are 4 modules in this course. This course is the eighth and final course in the Google Data Analytics Certificate. You'll have the opportunity to complete a case study, which will help prepare you for your data analytics job hunt. Case studies are commonly used by employers to assess analytical skills. For your case study, you'll ...

  21. Financial Big Data Analysis and Early Warning Platform: A Case Study

    To guard against systemic financial risk and help prevent and mitigate major risks, this work investigates multi-source heterogeneous data fusion algorithms and data cleaning technologies to establish a suitable framework for data analysis and big data computation. On this basis, the paper provides the foundation for early analysis of ...

  22. Quantitative Data Analysis. A Complete Guide [2024]

    From political polls to consumer surveys, quantitative data analysis techniques like weighting, sampling, and survey data adjustment are critical (a small weighting example appears in the second sketch after this list). Researchers also employ methods like factor analysis, cluster analysis, and structural equation modeling. Case Study 1: Netflix's Data-Driven Recommendations

  23. Analysis of Financial Data in Case Studies: A Guide for Management and

    Data source: Generalised experience Topics: Financial analysis; Ratio analysis of accounts; ... who need to provide a degree of financial analysis as part of a case study exercise, whose main concern is with management strategy. The guide looks not only at ratio analysis - sometimes the only tool of financial analysis used - but also at other ...

  24. Methodologic and Data-Analysis Triangulation in Case Studies: A Scoping

    Three studies described the cross-case analysis using qualitative data. Two studies reported a combination of qualitative and quantitative data for the cross-case analysis. In each multiple-case study, the individual cases were contrasted to identify the differences and similarities between the cases.

  25. Comparative case study on NAMs: towards enhancing specific ...

    A comprehensive comparison of early cellular responses with data from in vivo studies revealed that transcriptomics outperformed targeted protein analysis, correctly predicting up to 50% of in vivo effects. ... (2018) Adverse Outcome Pathway-Driven Analysis of Liver Steatosis in Vitro: A Case Study with Cyproconazole. Chem Res Toxicol 31(8):784 ...

  26. Spatial analysis of the impact of urban built environment on

    The built environment, as a critical factor influencing residents' cardiovascular health, has a significant potential impact on the incidence of cardiovascular diseases (CVDs). Taking Xixiangtang District in Nanning City, Guangxi Zhuang Autonomous Region of China as a case study, we utilized the geographic location information of CVD patients, detailed road network data, and urban points of ...

  27. Unsupervised meta-analysis on chemical elements and atomic energy

    Grouping (clustering) is a crucial step in data analysis; it organizes the objects of a data set into homogeneous classes. Within the present study, I utilized data mining methods to estimate the predictive potential of the chemical components. Data mining emerged in the 1990s to extract information from large databases. PCA's core concept is to ...

  28. Cash-back Fees

    The amounts in the Survey are lower than the average ATM withdrawal amounts reported in the 2022 Federal Reserve Payments Study, which uses data from surveys of financial institutions. Per this study, in 2021, the average ATM withdrawal was $198. The Federal Reserve Payments Study: 2022 Triennial Initial Data Release, Fed. Rsrv. Bd. (Apr. 21 ...

  29. Enhancing geotechnical zoning through near-surface ...

    The eastern Agadir (Morocco) was selected for the urban expansion. However, it faces challenges owing to its location within an alluvial basin of weak and heterogeneous sediments, compounded by the scarcity of geotechnical data. This study aimed to create the first geotechnical zoning map of the area to support informed urban planning. Geophysical surveys were employed with available in situ ...

  30. A Exploring the Nexus of Financial Incentives and Employee Motivation

    Abstract: The main objective of this study is to find out the financial incentive factors behind employee motivation, especially in financial institutions in Pakistan. Salary, housing allowance, and medical insurance were the independent variables, and employee motivation was the dependent variable. The sample comprised 300 respondents drawn from 190 financial institutions in Pakistan.
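To ground items 3 and 5 above, here is a minimal sketch of the indirect-method operating cash flow calculation alongside the current, quick, and net profit ratios. All figures are invented for illustration; a real analysis would work from a company's published statements.

```python
# Minimal sketch: indirect-method operating cash flow plus three common
# ratios, computed from invented illustration figures.

income_statement = {"revenue": 1_000.0, "net_income": 120.0,
                    "depreciation": 40.0}               # non-cash expense
balance_sheet = {"cash": 150.0, "receivables": 90.0, "inventory": 110.0,
                 "current_liabilities": 160.0,
                 "change_in_working_capital": -25.0}    # increase uses cash

# Indirect method: start from net income, add back non-cash expenses,
# then adjust for working-capital movements.
operating_cash_flow = (income_statement["net_income"]
                       + income_statement["depreciation"]
                       + balance_sheet["change_in_working_capital"])

current_assets = (balance_sheet["cash"] + balance_sheet["receivables"]
                  + balance_sheet["inventory"])
current_ratio = current_assets / balance_sheet["current_liabilities"]
quick_ratio = ((current_assets - balance_sheet["inventory"])
               / balance_sheet["current_liabilities"])
net_profit_ratio = (income_statement["net_income"]
                    / income_statement["revenue"])

print(f"Operating cash flow: {operating_cash_flow:.0f}")  # 135
print(f"Current ratio: {current_ratio:.2f}")              # 2.19
print(f"Quick ratio: {quick_ratio:.2f}")                  # 1.50
print(f"Net profit ratio: {net_profit_ratio:.1%}")        # 12.0%
```

The quick ratio simply drops inventory, the least liquid current asset, from the numerator.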
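Finally, picking up item 22's mention of weighting and survey data adjustment, here is a small post-stratification sketch: each respondent is weighted by the ratio of their stratum's population share to its sample share, so an over-sampled group counts proportionally less. The strata, shares, and responses are all invented.

```python
# Minimal post-stratification sketch on an invented ten-person sample.
import pandas as pd

sample = pd.DataFrame({
    "age_group": ["18-34"] * 6 + ["35-54"] * 3 + ["55+"] * 1,
    "response":  [4, 5, 3, 4, 4, 5, 3, 2, 3, 2],   # 1-5 scale answers
})
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# weight = population share of the stratum / sample share of the stratum
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(
    lambda g: population_share[g] / sample_share[g])

raw_mean = sample["response"].mean()
weighted_mean = ((sample["response"] * sample["weight"]).sum()
                 / sample["weight"].sum())
print(f"raw mean = {raw_mean:.2f}, weighted mean = {weighted_mean:.2f}")
```

Here the 18-34 group is over-represented (60% of the sample versus 30% of the population), so weighting pulls the mean from 3.50 down to about 2.92.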