What is benchmarking, and how to do a competitive analysis

Benchmarking is a complicated process that many large companies use to perfect their best practices and to outpace the competition. However, that doesn’t mean you can’t use benchmarking for your small online business.

In fact, you should use benchmarking, especially if you’re in the Knowledge Commerce market.

What is benchmarking? We’re going to get to that a little later, but for now, we want to paint a quick picture.

Let’s say that you’re a personal development expert who specializes in productivity. You know how to motivate people to get things done and achieve their goals.

You’ve created four online courses, two e-books, and a host of other digital products on productivity. Sales have been fairly steady over the last 12 months, but you’re not growing at the pace you expected.

Maybe your sales have stagnated. Perhaps you’ve already reached your target audience.

Whatever the case, it’s time to change things up.

If you want to maximize your revenue and brand awareness, you need benchmarking to figure out what’s working, what’s not working, and what your competitors are doing.

It sounds simple enough, but you need a clear, actionable strategy if you want benchmarking to yield positive results. That’s what we’re going to help you achieve today.

What Is Benchmarking? 

Benchmarking is the process of determining the best processes, strategies, and techniques for achieving your business goals.

That sounds simple enough, right? You’re just trying to create the best business possible.

The problem is that you sometimes need outside data to measure success. While your online business might be more profitable than it was six months ago, is it profitable enough to stand up to your competitors?

We’ll talk about competitive analysis later, but keep in mind that you need to maintain one eye on your competitors at all times. If you don’t, they can easily steal your customers and your profits.

With benchmarking, you want to start with your primary weaknesses.

Do you struggle with lead generation? Have you struggled to retain customers? Do you get complaints about your online course content?

Sometimes it’s tough to view your business objectively. If you need to bring on a third party to help, do so.

The important thing, though, is to find ways in which your business could improve. Maybe it’s customer service, pricing, advertising, digital marketing, or something else entirely.

Once you’ve identified your first focal point for benchmarking, figure out how your current process works. For instance, when it comes to digital marketing, maybe you focus on blogging, social media, and organic SEO.

Since you’ve discovered that your digital marketing strategy hasn’t yielded the results you expected, you now need to figure out how to tweak your process to make it more effective.

Maybe you need to add email marketing to your repertoire. That’s just one possibility — benchmarking will help you figure out other potential solutions.

Those are the basics of benchmarking, but we’re going to get into even more detail so you understand why benchmarking matters, how you can use it strategically, and what other businesses like yours have accomplished through benchmarking.

Why Is Benchmarking Important for Your Knowledge Commerce Business? 

Think of benchmarking as a report card, a guidebook, and a competition comparison all in one. Conducting benchmarking can tell you what you're doing wrong with your online business and help you discover ways to improve upon past efforts.

In Knowledge Commerce, competition will always exist.

After all, we're talking about a $243 billion industry. That number is nothing to sneeze at.

If you want to grab for yourself a significant piece of this lucrative pie, you need to use every tool at your disposal to turn your business into an industry powerhouse. Otherwise, your competitors will do it instead.

Furthermore, since your business is likely not a member of the Fortune 500 list, benchmarking won't take as much time, energy, or money. You have fewer factors to take into consideration, so the process will move much faster.

If you're not sure why benchmarking will benefit your business, consider your last few months of sales. What if you could increase them by a significant margin? Would that make benchmarking worth your while?

You can gain a remarkable competitive edge if you're able to understand your competitors as well as businesses like your own outside your industry and figure out what they're doing better than you.

However, it's not about copying your competition. Rather, you want to surpass your competition through improved processes and internal policies.

Maybe you run your business by yourself. That's fine. You still have specific protocols you follow when it comes to marketing, sales, customer service, product creation, and everything else.

What Other Online Course Makers Should You Benchmark Against? 

The first step in benchmarking is defining other businesses against which to measure your own. Think of these other businesses as yardsticks. They provide the measurements that you want to surpass.

Start with other Knowledge Commerce professionals who create online courses and other digital products within your industry. They don't have to offer courses that are exactly like your own, but they should create similar content.

Next, venture beyond your specific industry and find other businesses that are similar in size and revenue to yours. They might operate in completely different ways, but they can still teach you how to improve your processes, marketing techniques, revenue-generation methods, and more.

Once you have found competitors against which you can benchmark your own business, conduct as much research as you can on those businesses. Find out how they're targeting potential customers, managing customer service, and otherwise serving their target audience.

You might discover that one or more of the businesses you chose don't provide a reasonable benchmark for your company. That's okay.

Simply remove those businesses from your benchmarking study and choose other businesses to take their place. You don't want to spend time and energy on a project that won't yield actionable data.

What Are the Benefits of Benchmarking?

The benefits of benchmarking are quite diverse. If you conduct a little research, you'll find that some of the biggest brands in the world use benchmarking studies to improve their businesses and to find new ways to solve old problems.

First of all, benchmarking is a completely objective process. You're evaluating data and other critical information to determine how you stack up against your competition.

You're passionate about your business, which means that you might let your subjective opinion influence your decisions. Benchmarking removes the subjectivity and allows you to approach problems logically.

You can also use benchmarking to find new opportunities. During your research, you might find that your competitors or other benchmarking partners are using strategies that you never considered.

That happens to every business. You can't always think of every potential solution to a problem or technique to achieve a goal.

You might also have made assumptions about your business. Benchmarking allows you to either validate or invalidate those assumptions.

Obviously, you'd rather validate an assumption and know that you are right. That might happen. But even if you discover that your assumption was incorrect, you now have a way to correct the issue.

By the end of your benchmarking process, you'll have discovered new ways to approach a business and set goals for the future. That's a powerful advantage over your competition.

Once you've set those goals, you can proactively pursue them.

Every business needs performance expectations. In larger companies, for example, managers set performance expectations for their employees.

If you work alone, you need to set performance expectations for yourself. How many new digital products do you want to release in the next 12 months? How much revenue do you want to generate in the next four years?

Benchmarking can help you figure out what expectations are reasonable so that you can set out to meet them.

How Does Benchmarking Relate to Competitor Research? 

Benchmarking and competitor research are often confused. They are actually two separate ways to approach issues within a business, and benchmarking offers several clear advantages over competitor research.

For one thing, benchmarking focuses on the future. It's not a way to find a quick fix for a temporary problem. You're looking for long-term strategies to help your business grow.

Competitive research might show you how to mirror your competition's approach, but it won't help you surpass their results. Additionally, competitor research usually involves spying on the competition and making assumptions about their approaches to certain problems. Benchmarking takes the opposite approach.

You're looking for companies and individuals who have achieved the kind of results that you want to make possible for your business. That's why it's helpful to find non-direct competitors because you might want to contact them directly.

During a benchmarking study, you might read case studies and articles written by the companies who have set the benchmarks that you want to emulate. It's not a covert operation — it's a way to take your business to the next level.

What Is the Benchmarking Process? 

The benchmarking process can vary depending on your specific goals and the size of your business. A Fortune 100 company, for example, might spend years on benchmarking in different departments and on different policies and procedures.

For small businesses, though, benchmarking doesn't have to take as much time, and it involves fewer steps. Regardless of the size of your business, however, you need to observe at least these steps if you want your benchmarking study to prove effective.

1. Analyze Internal Processes

Start by taking a hard look at your own internal processes. How do you acquire new customers? How do you nurture leads? How do you retain customers and incentivize them to buy more of your products?

Asking these questions can help businesses in the Knowledge Commerce market to find ways to reach their target audiences.

You might also want to ask questions about your internal approach to creating new products. How long does it take you to create an online course from start to finish? What have the reviews been like? How does your course content compare to that of your competitors?

For every process, procedure, or technique that you want to benchmark, you need a comprehensive understanding of your current efforts and methodologies.
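
If you track orders in a spreadsheet or database, you can compute these baselines in a few lines of code. Here's a minimal Python sketch - every record and figure in it is hypothetical - that turns raw sales data into the kind of baseline metrics you'd want in hand before benchmarking:

```python
from datetime import date

# Hypothetical order records: (customer_id, order_date, amount)
orders = [
    ("alice", date(2024, 1, 5), 199.0),
    ("alice", date(2024, 6, 2), 49.0),
    ("bob",   date(2024, 2, 11), 199.0),
    ("cara",  date(2024, 3, 8), 299.0),
    ("cara",  date(2024, 9, 19), 99.0),
    ("dave",  date(2024, 4, 1), 199.0),
]

customers = {cid for cid, _, _ in orders}
repeat_buyers = {cid for cid in customers
                 if sum(1 for c, _, _ in orders if c == cid) > 1}

# Baselines you can later measure against benchmarking partners
avg_order_value = sum(amt for _, _, amt in orders) / len(orders)
repeat_rate = len(repeat_buyers) / len(customers)

print(f"Average order value: ${avg_order_value:.2f}")
print(f"Repeat-purchase rate: {repeat_rate:.0%}")
```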

2. Decide How Your Benchmarking Study Will Proceed 

Next, you need to create a game plan for your benchmarking study. You've already identified your competitors and other companies against which you want to measure your own, but now you need to decide how you will approach each benchmark.

Maybe you will read case studies, interview executives or owners of other businesses, conduct online research about web traffic and other digital marketing metrics, or something else entirely.

3. Decide How You’ll Measure Success 

There are lots of ways to measure success, especially when it comes to online marketing. For example, one business might find that increased web traffic corresponds directly with increased sales.

Another company, perhaps even in the same industry, might notice that high rates of customer retention provide more revenue than high levels of customer acquisition.

With this in mind, you need to decide how you will measure success. What metrics are your benchmarking partners using to facilitate their own success, and how will you apply those same metrics in measuring your progress?
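
One way to make that decision concrete is to write down each candidate metric with its direction of improvement and the benchmark you'll measure against. Here's a small Python sketch; the metric names and numbers are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    mine: float
    benchmark: float              # value observed at a benchmarking partner
    higher_is_better: bool = True

    def on_track(self) -> bool:
        # A metric is on track when it meets or beats the benchmark
        if self.higher_is_better:
            return self.mine >= self.benchmark
        return self.mine <= self.benchmark

metrics = [
    Metric("email open rate", 0.22, 0.28),
    Metric("refund rate", 0.04, 0.06, higher_is_better=False),
    Metric("monthly course sales", 140, 120),
]

for m in metrics:
    status = "meets benchmark" if m.on_track() else "needs work"
    print(f"{m.name}: {m.mine} vs {m.benchmark} -> {status}")
```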

4. Research and Gather Evidence 

This is perhaps the most important part of the benchmarking process. During this stage, you are gathering evidence and data to support your hypothesis about your benchmarking partners.

As mentioned above, you can gather information through a variety of sources. Case studies, interviews, data analysis, and other sources can help you paint a rich picture of your benchmarking partners' success.

5. Apply What You’ve Learned 

Now that you know the steps you need to take to bring your business to the next level, you can begin implementing policies and procedures that will help you grow your online business and sell more Knowledge Commerce products.

Don't forget that this can be a long-term strategy. Don't expect results in the first few days after you implement changes.

6. Test and Regroup 

It's important to note that what works for one company might not work for another. Even if a company has set the benchmark for a particular process or technique, it might not yield the same results for you.

We mentioned above that benchmarking and competitor research are two different animals. This is one reason why.

During competitor research, you might decide to copy a strategy that has worked well for your competitor. But that's where the game ends. In benchmarking, your goal is to apply strategies that you believe will elevate your business, then make changes based on the results you achieve.

What Type of Benchmarking Should You Use? 

Believe it or not, there are several different types of benchmarking that you might use for your online business. Some, such as international benchmarking, are better suited to larger companies. The best benchmarking strategies for smaller businesses in the Knowledge Commerce market, however, are competitive benchmarking and internal benchmarking.

Competitive benchmarking involves studying another company's best practices for a particular strategy or technique and adopting them for your business. Meanwhile, internal benchmarking involves transferring a strategy you already use to another aspect of your business.

For instance, you could use internal benchmarking to expand your business into a new social media realm. Maybe you've only used Facebook and Twitter in the past, but you're interested in developing a presence on Instagram.

Using internal benchmarking, you can evaluate the best strategies that have worked on Facebook and Twitter, then adopt them for Instagram.

Instagram is a completely different social media platform, which means that you will continually have to tweak your approach until you find the best strategy for it.

You can use competitive benchmarking to apply someone else's approach to Instagram to your own business. You might have seen a case study or other documentation online that matches what you want to achieve through social media.

How Can You Conduct a Competitive Analysis?  

Benchmarking often begins with competitive analysis. You want to know what your competitors are doing so that you can do it better.

A competitive analysis differs from general competitor research in that you're digging deeper into your competitors' overall strategies and finding ways to surpass them with your own business.

Ideally, you'll collect as much information about your competitors as possible. You want to know how they attract new leads, how they nurture leads through the sales funnel, how they maintain their online presence, and what share of the market they possess.

What competitors should you analyze?

It depends on your goals. If you want to boost brand loyalty and awareness, consider conducting a competitive analysis on an industry leader. You're looking for a competitor who has a large share of the market and who has a powerful voice online.

To put this into context, consider your friendly neighborhood grocery store. It is a small establishment that caters to only a few loyal customers. If that grocer wanted to conduct a competitive analysis, it might focus on a large brand, such as Kroger.

Even though you might conduct competitive analyses on brands that are much larger than yours, you can still learn from the data you collect and use it to help your own business become a stronger competitor.

How do you find this information?

You start by investigating them online. You can use a tool like SEMrush to collect data on your competitors. The type of data you collect will depend on the benchmarks in which you are most interested.

For instance, if you're interested in advertising your online courses, membership site, or other digital products, you might want to know how much your competitors are spending on advertising. You can then decide your advertising budget based on the ROI you anticipate.
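
To illustrate that budgeting logic, here's a hedged Python sketch: given an anticipated return on ad spend and a revenue goal, it backs out the budget you'd need and compares it to a competitor's estimated spend. Every number is a made-up placeholder, not real tool output:

```python
# All figures are hypothetical placeholders.
competitor_monthly_ad_spend = 4_000.0  # estimate from a research tool
anticipated_roas = 3.5                 # expected revenue per $1 of ads
revenue_goal = 10_000.0                # monthly revenue target from ads

required_budget = revenue_goal / anticipated_roas
print(f"Budget needed for the goal: ${required_budget:,.0f}/mo")
print(f"Competitor spends roughly:  ${competitor_monthly_ad_spend:,.0f}/mo")

if required_budget < competitor_monthly_ad_spend:
    print("You can hit the goal while spending less than the competitor.")
else:
    print("Hitting the goal means outspending the competitor; "
          "double-check whether the anticipated ROAS is realistic.")
```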

SEMrush also provides data about your competitors' web traffic, keyword rankings, and more. You can even set up brand monitoring to find out how often and in what context your competitors are mentioned online.

You can benchmark many different channels, from email marketing and social media to organic and paid web traffic. You might also want to track earned media if you're interested in improving brand awareness for your online business.

When conducting a competitive analysis, you can focus on channels in which you are already engaged or on channels that you would like to pursue in the future. Either way, you're looking for data that can help your business grow over the long haul.

This is why competitive analyses can often last for months or even years. You're not just conducting one-time competitor research; instead, you're looking for trends that hold over long periods of time.

Interpreting The Results 

After you have conducted your competitive analysis, you need to render the data in a way that makes sense to you and, if applicable, your team. You can use charts, graphs, spreadsheets, or any other tool that allows you to visualize the data and to draw conclusions from it.

For instance, let's say that you are conducting a competitive analysis to determine how often you should blog. You discover through the competitive analysis that your three main competitors blog at least once per day. Additionally, you find that each of your competitors' blog posts get at least three times the engagement that your blog posts get.

From that data, you can infer a few things. For one thing, you might want to blog more often, or write longer, more detailed blog posts to pull traffic away from your competitors. Additionally, you might want to incentivize your audience to engage with your blog posts through comments and social shares.
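
To make that inference systematic rather than impressionistic, you can reduce the raw observations to a couple of ratios. A minimal Python sketch, with made-up numbers standing in for your real counts:

```python
# Hypothetical observations from the competitive analysis
my_posts_per_week = 2
my_avg_engagement = 40  # e.g., comments + shares per post

competitors = {
    "Competitor A": {"posts_per_week": 7, "avg_engagement": 130},
    "Competitor B": {"posts_per_week": 7, "avg_engagement": 120},
    "Competitor C": {"posts_per_week": 8, "avg_engagement": 150},
}

for name, stats in competitors.items():
    cadence_gap = stats["posts_per_week"] / my_posts_per_week
    engagement_gap = stats["avg_engagement"] / my_avg_engagement
    print(f"{name}: publishes {cadence_gap:.1f}x as often, "
          f"earns {engagement_gap:.1f}x the engagement per post")
```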

You might want to further investigate the types of blog posts that your competitors share. What topics do they discuss? What CTAs do they use? What types of engagement are most common?

Over time, as you evolve your approach to blogging, you'll continually compare your results against those of your competitors. If you apply what you've learned effectively, you should see improvements in your results.

Benchmarking Examples 

To help you get the most out of benchmarking, let's look at a couple of fictional examples on which you might model your own competitive analysis.

Web Traffic 

Maybe you are interested in attracting more traffic to your Kajabi website. More traffic doesn't automatically result in more sales, but it will increase brand awareness and improve your chances of converting more people.

Web traffic, in this case, is your benchmark. You just have to tie that goal to a specific competitor or to a best practice in your industry.

For example, maybe you know that your closest competitor gets four times more traffic than you do. Now it's time to figure out why.

Investigate that competitor's website content, social media activity, organic reach on the search engines, and other data that might contribute to increased web traffic. You'll likely begin to see patterns that you can interpret and use for your own business.
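
A useful follow-up question is how fast you would need to grow to close a gap like that. Here's a quick sketch; the traffic figures are hypothetical, and it assumes the competitor's traffic stays flat:

```python
my_monthly_visits = 5_000
competitor_monthly_visits = 20_000  # the 4x gap from the example
months_to_close = 18

# Solve my_visits * (1 + g)^n = competitor_visits for g
gap = competitor_monthly_visits / my_monthly_visits
required_monthly_growth = gap ** (1 / months_to_close) - 1

print(f"Traffic gap: {gap:.1f}x")
print(f"Growth needed to close it in {months_to_close} months: "
      f"{required_monthly_growth:.1%} per month")
```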

Email Marketing 

Maybe you have decided to start an email marketing campaign. Good for you. This type of marketing campaign can result in high ROI and limited expense.

First, you need a benchmark. You might find a case study that indicates the average Knowledge Commerce business has 5,000 email subscribers, for example.

You don't just want 5,000 email subscribers, though. You want to surpass that number so that your business becomes more powerful and you reach more people.
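
If you know your current list size and your net monthly growth rate, you can estimate when you'll pass that benchmark. A hedged sketch, with invented starting numbers:

```python
import math

subscribers = 1_200    # current list size (hypothetical)
monthly_growth = 0.08  # 8% net growth per month (hypothetical)
benchmark = 5_000      # the industry figure from the case study

# Solve subscribers * (1 + g)^n >= benchmark for n
months = math.ceil(math.log(benchmark / subscribers)
                   / math.log(1 + monthly_growth))
print(f"At {monthly_growth:.0%} monthly growth, you pass "
      f"{benchmark:,} subscribers in about {months} months.")
```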

We mentioned above that competitor research often involves attempting to mirror the competition. That's not what we want here. We're looking for ways to make sure your business performs better than others in your category.

If there is a dearth of information on email marketing in the Knowledge Commerce marketplace, you might have to look outside your industry to find strategies you can use in your email marketing. For instance, you might use data from online marketers to understand best practices for improving email campaigns.

Getting the Maximum Value for Your Business 

Benchmarking has one goal: Make the most of every decision regarding your business. In other words, you don't want to waste time on marketing efforts, sales strategies, and product launches that won't result in high ROI.

During a benchmarking study, you get to know your competitors extremely well and you learn what businesses in other industries are doing to capture more market share and to improve revenue.

Both types of information can prove extremely valuable to your business.

Start by learning as much as you can about other Knowledge Commerce players in your industry. How much of the market share do they capture? What are they doing that you don't do? What can you do to make your business more attractive than theirs?

Set goals for your benchmarking study and decide what channels you want to investigate. If you're not interested in social media, there's no reason to collect data about it.

In Knowledge Commerce, data has become increasingly valuable across the board. The more you know about your competitors and about strategies that work for other businesses, the more equipped you become to compete.

Use Kajabi to Turn Your Knowledge and Content Into Products You Can Sell 

If you haven't started an online business, or if you're struggling in the Knowledge Commerce market, you might need a new solution to carry your business to the next level. That's exactly what Kajabi provides.

Sign up for a free trial so you can explore the Kajabi platform from the inside. Learn how you can set up an email marketing campaign, a CRM, sales pages, landing pages, and dozens of other assets.

We believe in simplifying the Knowledge Commerce market. Instead of wrangling dozens of tools from third parties, you can get everything you need under one umbrella.

Once you've set up your business, you can use benchmarking to improve your processes, strategies, and techniques. Doing so will help your business grow faster.

What is benchmarking? It's a powerful strategy that allows you to compare your business to your competitors and to find better ways to solve existing problems. It might even open your eyes to strategies you've never considered.

Why benchmarking? It's always a good idea to know what your competitors are up to. Additionally, you don't want to miss the opportunity to gather more of the market share and to improve brand awareness.

You'll start by analyzing your own internal processes. What's working and what's not? Then you'll decide on your benchmarking strategy. What channels will you analyze? How will you evaluate the data?

After you’ve made those decisions, it's time to gather evidence and to research your competitors. Use graphs, charts, and other visual representations of the data you collect.

You'll then find yourself ready to apply what you learned, to test the new strategies you've implemented, and to regroup and make changes when necessary.

Conducting a competitive analysis can help with your benchmarking study and provide more insight into your competitors. Use our fictional benchmarking examples to set up your own study today.

Understanding Benchmarking Analysis: A Step-by-Step Guide

So, you've heard a lot about benchmarking analysis lately, but what on earth does it actually mean? Don't worry, you're not alone. This peculiar term may sound like it belongs in an engineer's guidebook, but it's actually a powerful tool used by businesses to gain a competitive edge.

Whether you're a business owner looking to outperform the competition or just a curious individual eager to delve into the world of strategic analysis, this step-by-step guide will take you on a journey to understanding benchmarking analysis in the most humanly comprehensible way possible. So, fasten your seatbelts and get ready to unlock the secrets behind benchmarking analysis as we explore this fascinating concept together.

What is Benchmarking Analysis?

Definition of Benchmarking Analysis

Benchmarking analysis is a systematic process used to compare and evaluate an organization's performance against industry standards or best practices. It involves identifying areas for improvement, selecting benchmarking partners, collecting and analyzing relevant data, and implementing improvements based on the findings.

For example, a retail company can compare its customer satisfaction scores with industry benchmarks to identify areas where they need to enhance their service quality.

Importance of Benchmarking Analysis

The importance of benchmarking analysis in any business strategy lies in its ability to provide valuable insights into industry trends, best practices, and areas for improvement. By comparing key performance metrics with those of competitors or industry leaders, organizations can identify opportunities to optimize processes, enhance efficiency, and gain a competitive edge.

For instance, benchmarking analysis can help uncover innovative marketing strategies, cost-saving measures, or operational efficiencies that have proven successful for others in the industry. It enables businesses to stay ahead of the curve and continuously evolve by adopting proven strategies and adapting them for their own unique circumstances.

Real-Life Examples of Benchmarking Analysis

Real-life examples of benchmarking analysis can provide valuable insights into its practical application. For instance, a manufacturing company can benchmark its production processes against industry leaders to identify areas of improvement. Similarly, a retail business can compare its customer service metrics with those of top-performing competitors to enhance the overall customer experience. Benchmarking analysis can also be used in the healthcare sector to compare patient outcomes and treatment effectiveness across different hospitals. By examining real-world benchmarks, organizations can gather actionable insights and implement strategies to drive performance improvements and stay competitive in the market.

The Benchmarking Process

Step 1: Identify Areas for Benchmarking

To kickstart the benchmarking process, the first step is to identify the specific areas or processes in your organization that you want to benchmark against industry standards or top performers. This involves carefully assessing your business operations and determining the key performance indicators (KPIs) that are critical to your success.

For example, if you're a manufacturing company, you may choose to benchmark your production efficiency, product quality, or time-to-market against industry leaders. By pinpointing the areas for benchmarking, you can focus your efforts and resources on improving those aspects that have the greatest potential for impact and competitive advantage.

Step 2: Identify Benchmarking Partners

  • Look for organizations in your industry that excel in the areas you are benchmarking.
  • Seek out companies that have a similar size, market presence, and customer base to ensure relevance.
  • Consider partnering with organizations outside your industry for fresh perspectives and innovative ideas.
  • Utilize industry conferences, research reports, and professional networks to identify potential benchmarking partners.
  • Look for organizations that are known for their best practices and have a strong track record of success.
  • It is essential to approach benchmarking partners with a collaborative mindset and willingness to share information.

Step 3: Collect and Analyze Data

To conduct an effective benchmarking analysis, meticulous data collection and analysis are imperative. Begin by identifying relevant key performance indicators that align with your objectives. Gather quantitative and qualitative data from various sources, such as financial reports, customer surveys, and industry publications. Analyze the data to identify performance gaps and areas of improvement. Use statistical techniques to compare your metrics against industry benchmarks and top performers.

For example, compare your customer satisfaction scores to those of your competitors to understand your competitive standing. This data-driven analysis will provide valuable insights for implementing targeted improvements and driving performance enhancements.
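
As a small illustration of that comparison step, here's a Python sketch that summarizes raw survey scores and measures the gap to an industry benchmark. All figures are invented:

```python
import statistics

# Hypothetical 1-5 CSAT responses from your own survey
responses = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3, 4, 5]
industry_benchmark = 4.3  # assumed figure from an industry report

mean_csat = statistics.mean(responses)
stdev_csat = statistics.stdev(responses)

print(f"Your CSAT: {mean_csat:.2f} (sd {stdev_csat:.2f}, n={len(responses)})")
print(f"Benchmark: {industry_benchmark:.2f}")
print(f"Gap:       {mean_csat - industry_benchmark:+.2f}")
```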

Step 4: Compare and Evaluate Performance

Once you have collected and analyzed the benchmarking data, it is time to compare and evaluate your performance against the benchmarking partners. Look for gaps, similarities, and areas of improvement. Identify the best practices that lead to superior performance and see how you measure up.

For example, if you are benchmarking your customer service department, compare metrics like response time, customer satisfaction scores, and complaint resolution rates. If you find that your response time is slower compared to top performers in the industry, it may indicate a need for process optimization or resource allocation.

By evaluating performance in a comparative context, you can pinpoint strengths and weaknesses to guide your improvement efforts effectively.
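
Here's what that side-by-side evaluation might look like in code for a customer service department; the metric values are hypothetical:

```python
# (metric, your value, top performer's value, lower_is_better)
rows = [
    ("first response time (hrs)", 9.0, 2.5, True),
    ("customer satisfaction (1-5)", 4.1, 4.6, False),
    ("complaints resolved in 48h (%)", 71, 93, False),
]

print(f"{'metric':32} {'you':>8} {'best':>8}  verdict")
for metric, mine, best, lower_better in rows:
    behind = mine > best if lower_better else mine < best
    verdict = "gap - investigate" if behind else "at or above benchmark"
    print(f"{metric:32} {mine:>8} {best:>8}  {verdict}")
```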

Step 5: Implement Improvements

Once you have compared and evaluated performance using benchmarking analysis, the next step is to implement improvements based on the findings. This is where the real value of benchmarking analysis comes into play. Start by identifying best practices and strategies used by the benchmarking partners that are applicable to your own organization. Then, develop an action plan and communicate it to the relevant stakeholders. Implement changes gradually and monitor the impact to ensure effectiveness.

For example, if you find that a benchmarking partner has a more efficient customer service process, you can consider implementing similar streamlined procedures in your own organization. Continuous evaluation and adjustment are crucial to successful implementation.

Types of Benchmarking Analysis

Internal Benchmarking

Internal benchmarking involves comparing performance metrics and practices within different departments or divisions within the same organization. It allows companies to identify areas of improvement and best practices that can be shared across teams. For example, the marketing department can analyze the success metrics of different advertising campaigns run by various teams to determine which strategies yielded the best results.

By leveraging internal benchmarking, companies can foster collaboration, enhance efficiency, and identify opportunities for process optimization within their own organization. This approach encourages cross-functional learning and drives continuous improvement by implementing successful strategies across departments. Internal benchmarking is a cost-effective method for organizations to improve performance and achieve operational excellence.

Competitive Benchmarking

Competitive benchmarking involves comparing your company's performance against direct competitors in the industry. It helps identify areas where your organization can improve and gain a competitive advantage. By analyzing competitors' strategies, processes, and outcomes, you can uncover best practices and innovative approaches to implement in your own operations.

For example, examining how competitors handle customer service can inspire improvements in your own customer support processes. Competitive benchmarking allows you to stay informed about industry trends and adapt your strategies accordingly. It provides valuable insights into how your company measures up against the competition, enabling you to make informed decisions and drive continuous improvement.

Functional Benchmarking

  • Functional benchmarking is a type of benchmarking analysis that focuses on specific functions or processes within an organization.
  • It involves identifying best practices and performance metrics from other companies or industries that excel in the same function.
  • By studying and adopting these practices, organizations can improve their own processes and achieve better results.
  • For example, a manufacturing company can benchmark its supply chain management processes against a leading logistics provider to identify areas for improvement.
  • Functional benchmarking enables organizations to learn from others' successes and apply them to their own operations, leading to increased efficiency and effectiveness.

Generic Benchmarking

Generic benchmarking, a type of benchmarking analysis, involves looking outside of one's industry to identify best practices and innovative solutions. It allows companies to gain inspiration from other sectors and adapt successful strategies to their own context.

To conduct generic benchmarking effectively, consider the following:

  • Look for companies facing similar challenges in different industries.
  • Analyze their approaches to problem-solving and performance improvement.
  • Identify transferable practices that can be implemented in your own organization.
  • Examples of generic benchmarking include studying customer service techniques in the hospitality industry for application in retail, or learning about supply chain management from the automotive sector to improve efficiency in manufacturing.

By exploring ideas and practices from diverse sources, generic benchmarking encourages fresh perspectives and can lead to innovative and successful outcomes.

Benefits of Benchmarking Analysis

Identifying Performance Gaps

Benchmarking analysis allows businesses to identify performance gaps by comparing their performance against industry leaders or competitors. This helps in understanding areas where the company is falling behind and needs improvement.

For example, a retail company may discover through benchmarking analysis that its customer service response time is slower compared to its competitors. By addressing this performance gap, the company can enhance customer satisfaction and loyalty. Identifying performance gaps enables organizations to set realistic improvement goals and allocate resources effectively to bridge the gaps between their current performance and desired benchmarks.

Driving Continuous Improvement

  • Continuous improvement is a fundamental objective of benchmarking analysis.
  • It involves using insights gained from benchmarking to identify areas of improvement and implement changes.
  • By comparing performance against industry leaders, organizations can identify best practices and strategies to enhance their own operations.
  • Benchmarking enables companies to set performance goals, track progress, and establish a culture of ongoing improvement.
  • It helps identify areas where innovation and efficiency can be enhanced, leading to increased productivity and cost savings.
  • Continuous improvement through benchmarking analysis allows organizations to stay competitive and adapt to changing market conditions.

Enhancing Competitive Advantage

Benchmarking analysis is a powerful tool for companies aiming to gain an edge over their competitors. By identifying industry best practices and comparing performance against them, organizations can uncover areas for improvement and implement changes that lead to increased competitiveness.

For example, analyzing the supply chain processes of successful companies can provide insights on reducing costs and improving efficiency. Similarly, studying the marketing strategies of industry leaders can guide companies in enhancing their brand positioning and customer acquisition techniques. Benchmarking analysis enables businesses to adapt and adopt successful strategies, ultimately enhancing their competitive advantage in the market.

Challenges and Limitations of Benchmarking Analysis

Data Availability and Quality

One of the challenges in benchmarking analysis is ensuring the availability and quality of data. Reliable data is crucial for accurate benchmarking comparisons. Without access to relevant and up-to-date data, organizations may face difficulties in identifying areas for improvement or making effective comparisons.

For example, if a company is unable to obtain comprehensive industry data or lacks internal data on key performance metrics, it hampers their ability to benchmark effectively. To overcome this challenge, organizations can consider utilizing industry reports, surveys, or collaborating with benchmarking partners to gather comprehensive data sets.

Additionally, implementing data quality control measures, such as data validation and verification processes, helps ensure the accuracy and reliability of the benchmarking analysis.

Lack of Benchmarking Partners

Finding suitable benchmarking partners can be challenging for organizations. Limited accessibility to industry data or unwillingness of competitors to share information can hinder the benchmarking process. In such cases, companies can explore alternative options. They can consider benchmarking within their own organization by comparing different business units or departments.

Additionally, they can seek industry associations or research organizations that provide benchmarking data and insights. Another approach is to benchmark against best practices and standards established by industry leaders. While having benchmarking partners is ideal, organizations can still derive valuable insights and identify areas for improvement through alternative benchmarking methods.

Resistance to Change

Implementing benchmarking analysis within an organization may face resistance from employees and stakeholders. People naturally resist change due to fear of the unknown or disruption to established processes. To overcome this, it is crucial to communicate the benefits of benchmarking analysis, demonstrating how it leads to improved performance and competitiveness. Encourage involvement from all levels of the organization to increase buy-in and ownership. Address concerns and provide training to help employees adapt to new ways of working. By highlighting successful case studies and sharing best practices from other companies, you can inspire confidence in the value of benchmarking analysis and ease resistance to change.

Final thoughts

Benchmarking analysis is a valuable tool that businesses can utilize to evaluate their own performance and compare it to industry standards or competitors. This step-by-step guide breaks down the process, starting with identification and selection of benchmarks. It then focuses on data collection, analysis, and interpretation, with an emphasis on the importance of choosing relevant metrics and accurate data sources.

The article also delves into the significance of setting realistic goals and making actionable recommendations based on the analysis. By following this guide, businesses can make informed decisions to improve their performance and gain a competitive edge in the market.

Ready to 20x your analysis? Get started today!

Looking to maximize speed and quality of your company and market analysis? Comparables.ai is crafted for individuals like you who desire to transform their work and achieve their goals with greater ease. It's time you experience the power of our cutting-edge AI and the world's most comprehensive dataset of companies. Make the smart move – your future self will thank you!

Competitive benchmarking: Best practice guide

What is competitive benchmarking, and how can you use it to get ahead of your competition? Read on to learn how to create KPIs that effectively chart success and the best practices for developing a competitive benchmarking strategy.

What is competitive benchmarking?

Competitive benchmarking analysis seeks to understand your brand’s success against others within your industry. You might evaluate their business strategy, their practices, or the products and services they offer to see whether you compare favorably or unfavorably. Using key performance indicators, you can create a set of benchmarks for yourself to match up to others in your sector and understand where the gaps are.

Competitive benchmarking is a useful tool for understanding where you can go next with your brand strategy and where you could be serving your customers more effectively. It can help you see why your audience might choose a competitor over you, and help you work on strategies to attract them to your offering instead.

Typical competitive benchmarking includes metrics such as:

  • Customer engagement on social channels
  • Brand awareness
  • Customer experience ratings, such as satisfaction, ease of use, and more
  • Search engine results

Types of competitive analysis

Competitive benchmarking can be divided into three types: performance benchmarking, strategic benchmarking, and process benchmarking.

Performance benchmarking is where you compare your brand’s performance across revenue, brand awareness, social media engagement, and more to see how you fare against others in your industry. What results do you generate, and how could you improve against your competitors?

Strategic benchmarking is where you evaluate how your brand goes about its business in comparison to competitors. What business models do they use? What planning and execution style does your company use, and how does that compare to the market leader’s approach? By taking a deep look at how you plan for and carry out your strategy, you can become more effective at implementing change and improving.

Process benchmarking is the analysis of how well your current processes work within your business. Evaluating how your processes work in comparison to your competitors can help you to see the differences between your business and the next, allowing you to take inspiration from competitors or work on providing unique selling points to your customers. Becoming more efficient is a key reason to benchmark in this way.

One of the main advantages of breaking down competitive benchmarking in this way is that your overall business strategy can be overhauled in a manageable way. Rather than solely focusing on performance, you can also evaluate the processes and strategies that lead to performance results.

Competitive benchmarking metrics

Before you can create or use competitive benchmarking metrics, you’ll need to gather data and determine what key performance indicators will form the basis of your benchmark.

Gathering data

Before benchmarking yourself against your competitors, you’ll need to complete some thorough research to define your key performance indicators. These will help you to develop some specific metrics to measure yourself against now and in the future.

You might try finding data by doing the kinds of research below; a sketch for rolling the findings into a single comparable scorecard follows the list:

  • Business research: What is their annual revenue? How many people do they employ? What are their online reviews like?
  • Brand research: What is their share of voice or share of wallet in your sector?
  • SEO research: How do they fare in search engine results? Are they using paid social marketing to reach their audience?
  • Survey research: How do they survey their customers? How often do they ask for feedback?
  • Social media research: What level of social media engagement do they generate? How do their social media metrics compare against your own?
  • User experience research: Are their digital platforms mobile-optimized? Are their sites easy to navigate? Is it easy to understand their product and service offering, and make a purchase?
  • Content analysis: What content do they produce, how much, and when? Does it get high levels of engagement?
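
One lightweight way to pull these threads together is a weighted scorecard, where each research dimension gets a score and a weight. This is just a sketch: the weights are assumptions, and the 1-5 scores are what a researcher would fill in by hand:

```python
# Assumed weights per research dimension (they sum to 1.0)
weights = {
    "business": 0.15, "brand": 0.20, "seo": 0.15, "survey": 0.10,
    "social": 0.15, "ux": 0.15, "content": 0.10,
}

# Hypothetical 1-5 scores assigned during the research
scores = {
    "Us":           {"business": 3, "brand": 2, "seo": 3, "survey": 4,
                     "social": 2, "ux": 4, "content": 3},
    "Competitor A": {"business": 4, "brand": 4, "seo": 4, "survey": 3,
                     "social": 4, "ux": 3, "content": 4},
}

for company, s in scores.items():
    total = sum(weights[dim] * value for dim, value in s.items())
    print(f"{company}: weighted score {total:.2f} / 5")
```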

Of course, completing this research and creating competitive benchmark metrics without the available data can be difficult. That’s why, since 2019, Qualtrics’ XM Institute™ has completed its annual XMI Customer Ratings – Digital, a cross-industry, open-standard benchmark that provides a much-needed reference point for companies who want to compare their digital customer experience (CX) against their industry competitors.

Completing competitive benchmarking analysis with metrics

Qualtrics’ in-depth research into brands’ relationships with their audiences led to the definition of three critical aspects of a customer’s digital experience:

  • Emotion: As measured by customer satisfaction (CSAT) scores, how do customers feel about the brand?
  • Effort: As measured by customer effort scores (CES), how easy is it for customers to complete their tasks on a brand’s platform?
  • Success: As measured by task completion, how successful are customers’ experiences with the brand?

These three elements form the basis of Qualtrics’ DX3 metrics methodology. It is Qualtrics’ simplified approach for measuring meaningful digital experiences and understanding the drivers of conversion, loyalty, and improved customer lifetime value.
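
To ground those three measurements, here is a minimal Python sketch that computes emotion, effort, and success rates from raw survey responses. The scales and thresholds are common survey conventions, not Qualtrics’ actual scoring rules, and the responses are invented:

```python
# Hypothetical post-visit survey responses
visits = [
    {"csat": 5, "ces": 2, "completed": True},
    {"csat": 4, "ces": 3, "completed": True},
    {"csat": 2, "ces": 6, "completed": False},
    {"csat": 4, "ces": 2, "completed": True},
]

n = len(visits)
# Emotion: share of satisfied responses (CSAT 4-5 on a 1-5 scale)
emotion = sum(v["csat"] >= 4 for v in visits) / n
# Effort: share of low-effort responses (CES 1-3 on a 1-7 scale,
# where lower means easier)
effort = sum(v["ces"] <= 3 for v in visits) / n
# Success: plain task completion rate
success = sum(v["completed"] for v in visits) / n

print(f"Emotion (CSAT top-2 share): {emotion:.0%}")
print(f"Effort (low-effort share):  {effort:.0%}")
print(f"Success (completion rate):  {success:.0%}")
```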

Brands often complete CSAT and CES scoring to create internal benchmarks for comparison over time. However, these metrics can be difficult to incorporate into your competitive benchmarking, as the results are often private data.

Qualtrics’ BrandXM™ provides you with insight into your competitors’ performance, and helps you to create data-led strategies for improvement. Unlike other competitive benchmarks, Qualtrics solutions take into account DX3 metrics methodology, which measures economic value alongside other typical indicators of success.

The DX3 metrics in practice

DX3 metrics are able to tie business outcomes directly to increases or decreases in the measurements brands use to benchmark their success, either against themselves or against competitors.

Our latest findings:

  • As customer satisfaction (CSAT) improves, customer spend increases by up to 37%
  • As customer effort decreases, customer spend increases by up to 23%
  • Successful task completion isn’t a useful metric

This means there isn’t a strong relationship between the completion of a task and the amount customers spend. Most customers are able to complete a task - meaning that if they can’t do so on your site, you’re falling behind your competitors significantly.

Positive sentiment and low effort scores drive loyalty

Digital loyalty - or the likelihood to return to a brand’s site - is significantly increased when customers have a positive sentiment toward the brand and find it easy to complete their tasks.

[Figure: ease and satisfaction graph]

Brands that don’t focus on improving these two factors may well find themselves falling behind as consumers turn their digital loyalty to immediate competitors and industry leaders.

Gaps in Emotion, Effort, and Success reduce potential revenue

For eCommerce industries, gaps in emotion, effort, and success have a negative impact on potential revenue to be generated.

[Figure: expected revenue lost from digital experience gaps]

For businesses that sell their goods and services online, one way in which they can improve their bottom line above their competitors is to invest in their digital experience. Given that brands are failing to satisfy or make their experiences low-effort, this is an easy win for brands looking to get ahead.

Creating a strategic benchmarking approach

Using the DX3 metrics methodology as a guide, your strategy to outdo your competition becomes more sophisticated and increases ROI on your efforts. Rather than limiting your efforts to improve using only internal benchmarking, Qualtrics can provide you with competitor analysis and business strategies for improving your market position.

Developing a brand experience strategy using DX3

Rather than aiming to merely increase engagement or a site’s performance, using DX3 metrics alongside traditional competitive benchmarking can help you to develop a more rounded strategy for improved brand experience.

Here’s how you can use Qualtrics BrandXM with the DX3 methodology to develop your brand into an industry leader.

Improve the brand experience

Your brand experience should not only meet customer expectations, but exceed them. Brand sentiment and brand perception can go a long way to influencing a customer to choose you over the competition - make sure you invest in a strategy that not only measures this but creates insights-led action to tackle weaker areas. Customer satisfaction plays a large role in this, so make sure you’re developing a sophisticated survey program to understand precisely where pain points and successes lie.

Reduce customer burden

If there are obstacles in your customers’ way, they’re not going to choose your experience over your competitors’. It’s not enough to just meet industry best practices - you’ll need to be exceptional at making the digital customer journey a smooth process to stand out.

Assume completion is the minimum

Your customers should be able to do what they intend to do when interacting with your brand - fix the pain points in your processes to make sure you’re meeting the baseline for the competition.

The best tool for competitive research

Understanding your brand’s current position in the market and seeing how you stack up against the competition is key for improving your business outcomes.

Using Qualtrics, you can better:

  • Understand how you compare to competitors in your industry
  • Identify where you can leverage strengths and bolster weaknesses
  • Find top areas for improvement to keep you ahead of your competition
  • Create a cohesive brand strategy for a winning customer experience
  • Improve your customer experience and brand renown

Qualtrics CustomerXM helps you to prioritize customer feedback, ensuring that you make changes that will drive the highest business impact.

See how you can outdo your competitors with BrandXM

Competitive Benchmarking: What It is and How to Do It

Want to understand what your competitors are doing well and where there’s an opportunity for you to overtake them? You need competitive benchmarking.

This involves studying and setting benchmarks against industry leaders, disruptors, key competitors, and others. The reason? To get a full picture of where you stand against competitors while finding opportunities to improve.

Ready to learn more about it? Dive in as we discuss the following:

  • What is competitive benchmarking?
  • What are the benefits of competitive benchmarking?
  • Tips for determining competitive benchmarks for your organization


What is Competitive Benchmarking?

Competitive benchmarking is a process of researching competitors, industry leaders – in short, anyone who serves the same audience or offers the same product as you do.

The aim? To study the strategies and practices competitors use and get a comparative overview of how well you’re doing in the market.

The key to successful competitive benchmarking, however, is to stay in charge of the process by pre-defining the competitors you’ll analyze. Typically, folks get carried away and study one too many competitors.

So what’s the ideal number of competitors you should be studying? Over half of our contributors, 54.3%, benchmark against 4-5 competitors; 31.4% benchmark against 2-3, and only 14.3% benchmark against more than five businesses.

How many competitors do you benchmark against?

Related: Benchmark Reporting: How to Prepare, Analyze and Present a Good Benchmark Report?

What are the Benefits of Competitive Benchmarking?

Competitive benchmarking brings a plateful of benefits. These include:

  • Get a full overview of where you stand against your competitors and in the market, including what the broader target audience is saying.
  • Improve the value you offer to prospects and customers alike by studying and improving upon the experiences others in the industry offer.
  • Grow a culture and mindset of continuous improvement across everything, from your marketing to customer experience and product quality.
  • Eventually outperform competitors and grow your sales while learning how to differentiate yourself from others.

Tips for Determining Competitive Benchmarks for Your Organization

With the basics done, let’s look at how you can set up benchmarks for yourself and how to excel at competitive benchmarking.

Here’s a quick list followed by the details:

  • Start with market research
  • Understand your goals before benchmarking
  • Always identify pointers and companies to study against
  • Co-decide who to benchmark against
  • Consider reputation and product benchmarking
  • Look at quantitative data too
  • Also look at the biggest players
  • Keep an eye on potential industry disruptors

1. Start with market research

This one’s a hat tip to Daniela Sawyer from FindPeopleFast. Market research is helpful because it stops you from overlooking important names in your haste to start competitive benchmarking.

In this regard, Sawyer explains the process they follow: “I follow these steps to choose competitive benchmarks:

  • First, I conduct analytical market research using available metrics. It needs to be accurate, or close to the real output, so that the next steps can be accurate too.
  • Following that, I attempt to identify both my actual and potential competitors. Usually, I try to identify real competitors. With the competitors’ list, I make a competitive report.
  • Finally, I analyze the metrics and data so that the weak points in my business can be found. With proper research and listings of competitors, the analysis process is easy for me. Weak points, intermediate points, and competitors’ traffic sources are the main objectives for analyzing the data.”

“After a long list is analyzed in the last steps,” Sawyer continues, “I target 3 to 5 competitors to benchmark against. It takes time to choose these 3 to 5 competitors because they are the real competitors for my business. I ensure a minimum of 1 final competitor to benchmark against from each category.”

2. Understand your goals before benchmarking 

“To avoid getting overwhelmed with all the data we could potentially benchmark against competitors, we homed in on our specific brand goals to assess which KPIs to focus on benchmarking,” comments Stephen Light from Nolah Mattress.

Put another way, in addition to market research to jump-start competitive benchmarking, you need to get clarity on your goals. For instance, if you are focusing on customer loyalty, there’s no point in studying a competitor’s brand awareness metrics.

Light shares their example too. “There are tons of social media metrics we could focus on, but we knew that engagement on those channels isn’t the most important for our particular eCommerce brand.”

“We knew that for a mattress brand like us, brand awareness and share of voice is where we needed to focus, and that benchmarking traffic trends in certain periods – like the traffic we get from different sources compared to our competitors – was of greater importance,” Light explains.

Leanna Serras of FragranceX takes the same approach. “We choose our competitive benchmarks based on our current brand goals.”

“If we are focusing on our brand awareness, we will compare our social media posting frequency and engagement rates with that of our competitors,” Serras notes.  “If we are focusing on sales, we will compare our website traffic and average visit duration with that of our competitors.”

In short, “you have to benchmark for your goals, not for everything, or you’ll get lost” in Light’s words.

Related: Goals Based Reporting: Everything You Need to Know

3. Always identify pointers and companies to study against

The clearer you are on this, the better your competitive benchmarking. Take CocoSign, for example: they have a thorough plan of who to study themselves against and what to study.

Explains Stephen Curry: “Our competitive benchmarks are based on research on the competition and the identification of critical competitive metrics.

Usually, we benchmark against three to five competitors depending on the objective. The list will include top performers in our industry, immediate competition referring to those we rank equally, and competitors who operate offline but with a sizable market share.”

Curry isn’t alone in benchmarking against these folks. At 42%, the largest share of our contributors benchmark against close competitors.

Another 31% have a mixed selection like Curry’s team, over 25% study themselves against key industry players, and only 9% keep tabs specifically on industry disruptors.

What type of competitors do you benchmark against?

“In addition, we include pertinent competitive metrics mainly depending on the services we offer and those offered by companies against which we benchmark,” Curry goes on.

“That’s because benchmarking paints a more realistic picture when we include companies that don’t necessarily compete with our services but for the same audience. It also adds value to focus on ratios and rates rather than absolutes, especially when starting.”

4. Co-decide who to benchmark against

Continuing on what to analyze among competitors, Purrweb’s Sergey Nikonenko recommends not making the decision in isolation.

At Purrweb, for example, Nikonenko writes, “we use pre-existing KPIs for a competitive benchmark and we usually benchmark four to five competitors. We choose our KPIs and determine what metrics capture them.”

“In the last few years, we have been choosing competitive benchmarks by identifying the competitors that are similar to our industry. Our marketing team measures their product offerings, audience, size, location, etc.,” Nikonenko elaborates.

To top that, Nikonenko shares: “We also consult with all parts of the business to see what would be useful to include.”

Consulting with your team and other business departments on competitive benchmarking is critical for ensuring you don’t miss any important pointers to benchmark against. Doing so means your benchmarking metrics are likely to be more thorough and, as a result, your competitive study more useful.

5. Consider reputation and product benchmarking

Essentially, your market reputation can make a lot of difference. In fact, a positive reputation drives word of mouth, referrals, even brand awareness. It also slips social proof into the picture, convincing prospects to buy from you on the strength of the name you’ve built.

Considering the role that reputation plays, it makes sense to dive into reputation benchmarking.

At HouseCashin, for example, Marina Vaamonde observes: “We compare our reputation with that of our competitors by gathering data related to how customers perceive us and our competitors.”

The metrics they look at? “Customer satisfaction rates, social media engagement, brand awareness, net promoter scores, and more.”

Speaking of metrics to study in competitive benchmarking, most of our respondents – 74% to be specific – study growth assessment metrics.

Some 50% also look at product success, and a little less than 50% study social media metrics.

Most important competitive benchmarking metrics

“Regarding product benchmarking, we look at the product offerings of competitors in the marketplace and look at how well they provide value to their customers,” adds Vaamonde.

“However, we also look at what customer segments our competitors are targeting. If they’re targeting very different segments from ours, it doesn’t make sense to benchmark products. Success for them is different from success for us because it’s no longer an apples-to-apples comparison.”

Meaning: if you and your competitors serve the same audience, you’ll want to see how they provide value to their prospects and customers. To this end, study their marketing campaigns, social media messaging, customer service, product onboarding process, and so on.

After all, the more value you can provide, the better you can convert prospects and the more loyal customers you’ll have.

On that note, Andrew Raso from Online Marketing Gurus Australia adds that they “track the performance of companies that target my audience but are not my direct competitors.”

The reason: “It helps me catch fresh ideas for my market or innovative methods they use for growth. I do it because I feel that I know my market inside and out,” highlights Raso.

“Analyzing this data gives me valuable insight into the needs and expectations of my target audience. That allows me to recognize new business opportunities and areas for improvement.”

Related: Direct vs. Indirect Competition: Most Important Things You Can Learn from Monitoring Both

6. Look at quantitative data too

“We almost exclusively benchmark on qualitative data, messaging, and positioning,” outlines Alex Birkett.

“In terms of quantitative data – revenue, traffic, leads – we’re running our own race,” Birkett notes. “It’s nice to know where to aim, but at the end of the day, it’s not useful to benchmark to others when we have a different strategy.”

“However, the language people use to describe us versus competitors, the value propositions competitors use on their homepage, and qualitative comparison data we collect in our sales calls and surveys help us take action.”

“They help us optimize our copywriting, invest in unique channels, and inform our sales enablement materials and sales process,” explains Birkett.

In your competitive benchmarking too, note down key competitors’ value propositions, messaging, and other qualitative data so you can keep sharpening your marketing and converting better.

PRO TIP: How Well Are Your Marketing KPIs Performing?

Like most marketers and marketing managers, you want to know how well your efforts are translating into results each month. How much traffic and new contact conversions do you get? How many new contacts do you get from organic sessions? How are your email campaigns performing? How well are your landing pages converting? You might have to scramble to put all of this together in a single report, but now you can have it all at your fingertips in a single Databox dashboard.

Our Marketing Overview Dashboard includes data from Google Analytics 4 and HubSpot Marketing with key performance metrics like:

  • Sessions. The number of sessions can tell you how many times people are returning to your website. Obviously, the higher the better.
  • New Contacts from Sessions. How well is your campaign driving new contacts and customers?
  • Marketing Performance KPIs. Tracking the number of MQLs, SQLs, new contacts, and similar metrics will help you identify how your marketing efforts contribute to sales.
  • Email Performance. Measure the success of your email campaigns from HubSpot. Keep an eye on your most important email marketing metrics, such as number of sent emails, number of opened emails, open rate, email click-through rate, and more (the sketch after this list shows how these rates are typically derived).
  • Blog Posts and Landing Pages. How many people have viewed your blog recently? How well are your landing pages performing?
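The email rates named above are simple ratios, and it helps to see how they fall out of the raw counts. Below is a purely illustrative Python sketch; the counts are hypothetical, not pulled from HubSpot or Databox, and the delivered-based denominators are a common convention rather than a universal rule.

```python
# Illustrative only: deriving common email metrics from raw campaign counts.
# All numbers are hypothetical.

sent = 10_000
delivered = 9_600   # sent minus bounces
opened = 2_400
clicked = 480

open_rate = opened / delivered * 100            # often computed on delivered
click_through_rate = clicked / delivered * 100
click_to_open_rate = clicked / opened * 100     # engagement among openers

print(f"Open rate: {open_rate:.1f}%")                    # 25.0%
print(f"Click-through rate: {click_through_rate:.1f}%")  # 5.0%
print(f"Click-to-open rate: {click_to_open_rate:.1f}%")  # 20.0%
```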

Now you can benefit from the experience of our Google Analytics and HubSpot Marketing experts, who have put together a plug-and-play Databox template that contains all the essential metrics for monitoring your leads. It’s simple to implement and start using as a standalone dashboard or in marketing reports, and best of all, it’s free!


You can easily set it up in just a few clicks – no coding required.

To set up the dashboard, follow these 3 simple steps:

Step 1: Get the template 

Step 2: Connect your HubSpot and Google Analytics 4 accounts with Databox. 

Step 3: Watch your dashboard populate in seconds.

“As for specific tools,” Birkett notes, “we use Wynter very heavily. We also do our own heuristic audits of competitive websites to determine what key benefits they’re leaning into and how we can differentiate our positioning in the market.”

Other tools include Ahrefs, SEMrush, and Google Keyword Planner – the top three competitive benchmarking tools, used by 74%, 53%, and 40% of our contributors, respectively.

Which tool do you use to monitor your competitors?

7. Also look at the biggest players

While you might be too busy looking to your left and right, Ronen Yuval from Karma recommends monitoring the biggest market players at the top.

“I won’t replicate what they can afford with multi-million budgets for product development and marketing,” Yuval admits. “But I can learn from their strategies.”

“Their actions provide insight into where the market is heading and help me gain clarity about industry trends.”

Yuval also points out: “Larger companies also spend tons of time and resources on consumer research to identify the preferences, attitudes, motivations and buying behavior of targeted customers.”

“So if you carefully monitor the changes in their product, the site navigation, email marketing content, etc., you can get free insights to inform your marketing strategies.”

So you know what to do, right? Here’s your checklist. Study industry leaders’:

  • Website architecture and user experience
  • Product, email, social, and blog marketing campaigns
  • Research reports including findings about consumer behavior and motivation

Related : How to Do an SEO Competitive Analysis: A Step-by-Step Guide

8. Keep an eye on potential industry disruptors

Why? Because Zenpost’s Dave Polykoff says “they often come to the market with an offering that they expect may close a gap in a vacant niche.”

“Their brand-new products and strategies may even change the face of the niche,” adds Polykoff.

It’s why Polykoff shares, “I monitor new players on the market, especially small, emerging businesses that have received funding. These are likely new industry disruptors that might be growing faster than the industry average.”

The take-home message? “Benchmarking against new smaller companies in your field can help you better understand potential threats and be ready to use some of their ideas to your advantage.”

Level Up Your Business with Databox

In short, when it comes to competitive benchmarking, it’s best to start with being clear on your goals and researching your market to understand who your direct and indirect competitors are.

From there, go on to identify who to benchmark against and what metrics you want to analyze.

Databox’s Benchmarks feature allows you to see how your company compares to others like yours and helps you set better strategy and business goals. Opt in now to get free access!



8 Steps of the Benchmarking Process


Businesses are always striving for high performance, from creating more efficient processes to selling more of their products and services. But how does a company determine whether it is successful?

Through the benchmarking process, any business can compare itself against a standard and develop a consistent way of measuring performance. Below we’ll cover what benchmarking is, how the benchmarking process can help your business, and how to create benchmarks for a successful improvement plan. 

What is benchmarking?

In business, benchmarking is a process used to measure the quality and performance of your company’s products, services, and processes. These measurements don’t have much value on their own—that data needs to be compared against some sort of standard. A benchmark.

For example, suppose it takes 30 minutes to produce your product. Is the 30-minute measurement good or bad? The only way for you to know is to compare against other data, such as the time it takes another organization to produce a similar product. If another organization can produce the same type of product in less than 30 minutes, you can use their time as a benchmark for measuring your own processes and procedures.

The objective of benchmarking is to use the data gathered in your benchmarking process to identify areas where improvements can be made by:

  • Determining how and where other companies are achieving higher performance levels than your company has been able to achieve.
  • Comparing the competition’s processes and strategies against your own.
  • Using the information you gather from your analyses and comparisons to implement changes that will improve your company’s performance, products, and services.

Common areas that you may want to target for benchmarking analysis include cost per unit, time to produce each unit, quality of each unit, and customer satisfaction. The performance metrics you get from these targets can be compared against others to help you determine best practices for improving your operations.
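To make that comparison concrete, here is a minimal Python sketch of the core benchmarking move: pairing each internal measurement with an external reference before judging it. Every metric name and number below is hypothetical, chosen only to illustrate the technique.

```python
# Minimal sketch: flag the metrics where you trail the benchmark.
# All metric names and values are hypothetical.

our_metrics = {
    "cost_per_unit_usd": 12.50,
    "minutes_per_unit": 30,
    "defect_rate_pct": 2.1,
    "csat_score": 4.1,   # customer satisfaction on a 1-5 scale
}

benchmarks = {
    "cost_per_unit_usd": 10.00,
    "minutes_per_unit": 24,
    "defect_rate_pct": 1.5,
    "csat_score": 4.4,
}

# For cost, time, and defects, lower is better; for CSAT, higher is better.
lower_is_better = {"cost_per_unit_usd", "minutes_per_unit", "defect_rate_pct"}

for metric, ours in our_metrics.items():
    bench = benchmarks[metric]
    gap = ours - bench if metric in lower_is_better else bench - ours
    status = "behind benchmark" if gap > 0 else "at or ahead of benchmark"
    print(f"{metric}: ours={ours}, benchmark={bench} -> {status}")
```

The code is trivial on purpose: the discipline it encodes is that no measurement is labeled good or bad until it sits next to an external reference.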

Benchmarks vs. KPIs

While both benchmarks and KPIs (key performance indicators) help you measure performance, they are distinct. Where benchmarks act as a reference point to compare performance levels, KPIs measure performance against stated objectives.  

Why is benchmarking important?

The goal of your business should be to grow, improve processes, increase quality, decrease costs, and earn more money. Benchmarking is one of many tools you can use as part of any continuous improvement model used within your organization.

Consistent benchmarking can help you:

  • Improve processes and procedures.
  • Gauge the effectiveness of past performance.
  • Get a better idea of how the competition operates, which will help you identify best practices to increase performance.
  • Increase efficiency and lower costs, making your business more profitable.
  • Improve quality and customer satisfaction.

Types of benchmarking

There are many different types of benchmarking that fall into three primary categories: internal, competitive, and strategic.

Internal benchmarking

If other teams or organizations within your company have established best practices in processes similar to yours, internal benchmarking involves analyzing what they are doing so you can find areas where you can improve and be more efficient.

For example, you could compare the performance of one warehousing and shipping site against another. The site with superior performance simply needs to share its processes and procedures so that the entire company benefits from the increased performance.

Competitive benchmarking

This type of benchmarking is a comparison of products, services, processes, and methods of your direct competitors. This type gives you insight into your position within your industry and what you may need to do to increase productivity.

For example, you can compare the customer satisfaction of a competitor’s product to yours. If your competitor is getting better customer reviews, you need to analyze what the difference is and figure out how to improve the quality of your product. 

Strategic benchmarking

Use this type of benchmarking when you need to look beyond your own industry to identify world-class performance and best practices, then look for ways to adapt those methods to your own procedures and processes.

8 steps in the benchmarking process

1. Select a subject to benchmark

What to benchmark is just as important as how to benchmark it. Executives and other senior management should be involved in deciding which processes are critical to the company’s success. Prioritize the processes based on which metrics are most important to all stakeholders, with an emphasis on processes or functions that are easily quantifiable. After prioritizing, select and define the measures you want to collect.

2. Decide which organizations or companies you want to benchmark

Determine if you are going to benchmark processes within your own company, a competitor, or a company outside of your industry.

It may be hard to collect all the data you want if you benchmark a direct competitor, so select several different organizations to study in order to get the data you need. Gather information from several sources to build the most detailed picture of each organization you select to study.

3. Document your current processes

Map out your current processes so you can identify areas that need improvement and more easily compare against the chosen organization.

4. Collect and analyze data

This step is important—but it can prove difficult when you are trying to gather data from a competitor because a lot of that information may be confidential. Gather information through research, casual conversations with contacts at the other companies, and formal interviews or questionnaires.

You can also collect secondary information from websites, reports, marketing materials, and news articles. However, secondary information may not be as reliable.

After you have collected enough data, get all stakeholders together to analyze the data. 

5. Measure your performance against the data you’ve collected

Look at the data you’ve collected side by side with the metrics you gathered from your analysis of your own processes. You may want to layer your performance metrics on top of your process diagrams or map out your competitor’s processes to more easily see where you’re falling behind.

As you analyze the comparisons, try to identify what causes the gaps in your process. For example, do you have enough people and are they sufficiently trained to perform assigned tasks? Perhaps there are multiple steps that can be automated or combined to streamline workflow. Brainstorm ideas to effectively and efficiently fill those gaps.
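As a rough illustration of that side-by-side view, the sketch below lays hypothetical per-step cycle times against a competitor’s to show where the overall gap is concentrated. The step names and timings are invented (they deliberately sum to the 30-minute vs. faster-competitor example from earlier).

```python
# Hypothetical sketch: find which process step contributes most to the gap.

ours = {"order intake": 10, "assembly": 14, "QA": 4, "packaging": 2}   # minutes
theirs = {"order intake": 4, "assembly": 13, "QA": 4, "packaging": 3}

gaps = {step: ours[step] - theirs[step] for step in ours}
worst = max(gaps, key=gaps.get)

for step, gap in gaps.items():
    flag = "  <-- biggest gap" if step == worst else ""
    print(f"{step}: us {ours[step]} min vs them {theirs[step]} min ({gap:+d} min){flag}")
```

Here the totals (30 vs. 24 minutes) mirror the earlier production example, and the breakdown shows the shortfall sitting almost entirely in one step rather than spread evenly—exactly the kind of insight that tells you where to focus.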

6. Create a plan

Create a plan to implement agreed-on changes that you have identified as being the best to close performance gaps. Implementation requires total buy-in from the top down. Your plan must include clearly defined goals and should be written with the company’s culture in mind to help minimize any pushback you may get from employees.

7. Implement the changes

Closely monitor the changes and employee performance. If new processes are not running as smoothly as expected, identify areas that need to be tweaked. Make sure all employees understand their jobs, are well trained, and have the expertise to complete their assigned tasks.

Document all processes and make sure all employees have access to documentation and instructions so that all are on the same page working toward the same goal. 

8. Repeat the process

After successfully implementing a new process, it’s time to find other ways to improve. The benchmarking process is one of continual improvement and iteration. Review the new processes you’ve implemented and see if there are any changes that need to be made. If everything is running smoothly, look to other areas or more ambitious projects that you may want to benchmark and start the process again.

When you correctly implement and follow the continuous practice of benchmarking, your company will grow, and you will keep up with (or even surpass) your competitors.




How to use benchmarking to set your standards for success


How do you know when your work is successful? Benchmarking is a data-driven process that helps you create your own standards to measure success. Setting benchmarks is a simple way to set clear expectations for your team. In this article, learn the different types of benchmarking and the steps to create your own benchmarks.

Success is a vague term—what is it? And how do you know when you, your projects, and your business are successful? The truth is, everyone measures their success differently. This makes success hard to define, especially when you’re managing a team or growing a business.

What is benchmarking?

A benchmark is a predetermined standard, and benchmarking is the process of setting those standards. To determine benchmarks, you need to measure your work against something else. There are a variety of things you can set benchmarks against, including:

Competitors. Comparing your work or desired results against your competitors shows you what’s normal in the industry and what customers expect. Once you know this, you can adjust your business, product, or messaging to remain competitive.

Previous results. Using previous results as your benchmarks shows you if you’re improving internally and helps you identify gaps in your processes and workflows . If you’re improving, you can double down on what you’re doing (because it’s working). If you’re not, this is a great opportunity to make changes.

Goals. Using goals as a benchmark shows if your results are what you expected or initially wanted when you began. If you’re falling short, you might need to adjust your goals to make sure they’re achievable.


What’s the difference between benchmarking and setting goals?

You can use both goals and benchmarks to analyze project outcomes, but they’re slightly different in practice. The fundamental difference is the target: goals are what you aspire to achieve and tend to be growth-oriented, while benchmarks compare your actual results against a reference point.

For example, imagine you’ve set a business goal to hit $500K in recurring revenue this year. This goal illustrates what you want to achieve in order to grow and expand your cash flow. On the other hand, your revenue benchmarks would show you the amount of money you earn compared to competitors or what you made last year. Usually, you can use benchmarks to inform goals. If you know you hit $200K in revenue last year and you’re growing exponentially, then $500K is a realistic goal.
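That sanity check is easy to make explicit. The following back-of-the-envelope sketch uses the numbers from the example above; the historical growth figure is an assumption added purely for illustration.

```python
# Does last year's trajectory make the $500K goal realistic?
last_year_revenue = 200_000
goal_revenue = 500_000

required_growth = (goal_revenue - last_year_revenue) / last_year_revenue
print(f"Required YoY growth: {required_growth:.0%}")  # 150%

historical_growth = 1.80  # assumption: revenue grew 180% the previous year
print("Realistic goal" if historical_growth >= required_growth
      else "Stretch goal")
```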

Benchmarking versus competitor analysis

Both benchmarking and competitor analyses use competitor research to determine how other companies operate. The difference between the two is scope—benchmarking is smaller and focuses on individual business processes, while competitive analysis is larger in scope and focuses on big-picture strategies and goals. 

With benchmarking, you use competitor research data to review your own processes and best practices. You record and save these as benchmarks, and use them to set the standard for how you work. This is slightly different from a competitive analysis, where you use the data to review your overall business strategy . Whether you decide to use benchmarking or competitor analysis depends on the scope of your project. If you're only looking at processes, benchmarking works fine. If you’re looking for data on larger strategies and goals, you might want to use a competitor analysis.

Why is benchmarking important?

Benchmarking helps you set the standards for how you work. But instead of choosing standards based on opinions or ideas, benchmarking is data-driven. This ensures that your work standards are targeted and focused on things that have the greatest impact.

Benchmarking also shows employees the rationale for workplace expectations and goals. It gives you data to demonstrate why your team's daily tasks are important, so everyone knows what they're working for and why.

There are many benefits to benchmarking, including:

Define and determine success. With benchmarking, you get to decide what success looks like for your company. For example, if your benchmark for success is a consistent 10% increase in lead generation YoY and you’re on track to hit 11%, you’ll know you’ve exceeded expectations.

Identify gaps. Benchmarking reveals gaps as compared to your competition. For example, it’s hard to stay competitive if you’re producing three new product features in the same timeframe your competitors are producing eight.

Set higher standards for product quality. Benchmarking results in higher-quality products or services that improve customer satisfaction. When your benchmark is to host four community events every year, for example, you’re setting the standard to interact with your customers more regularly.

Here’s how setting benchmarks can help you

Setting benchmarks is simple, but it’s a process. Before you begin, collect relevant benchmarking data to use for your comparison. This can be data on your competitors, previous work, or your goals. The metrics from this data collection will be the baseline for your benchmark analysis.

For example, let’s say you’re tracking product launches. You find it takes you three months to go from ideation to launch. That timing might sound long or short to you depending on your perception, but perception isn’t an accurate way to track if this is the best timing for your launch. Instead, you can use benchmarks to answer how long each product launch should take. For example, you might look into:

How long does it take your competitor to launch a similar product? 

How long did the last product launch take?

Have we made improvements to our processes that will save us time during this launch?

Note that other details will come into play here as well. Your competitor might launch a product faster than you, but that’s not relevant if their team and budget are double the size of yours. Consider how relevant your benchmark is to your current situation and choose one that makes the most sense for each specific scenario.

Types of benchmarking

There are three different types of benchmarking: internal, competitive, and strategic. The type you use will depend on what you’re measuring, and what you’d like to get out of it.

Internal benchmarking

If you’re new to benchmarking, internal benchmarking is the easiest type to start with. Like other parts of project management, internal benchmarking uses organizational knowledge to answer questions. Also, because you’re reviewing internal information, data collection is entirely within your control. For internal benchmarking, review business performance indicators for other departments, teams, or even previous work. Look for best practices or effective processes you can apply to your current work.

You can collect internal information by:

Using questionnaires and asking colleagues or direct reports what they achieved, and how.

Reviewing past projects and looking for business processes that have given your company a competitive advantage.

Studying high-impact initiatives —what made them work? If you can reuse and standardize the processes or best practices that made these projects successful, then these are good candidates for performance benchmarking.

Looking at previous goals to see if your work matches your expectations.

As you collect this information, take note of any desired results (i.e., processes you’d want to replicate and that could become a standard). At the same time, keep an eye out for any performance gaps—the difference between your actual performance and what you intended. This process is similar to a gap analysis, which compares your current performance to your desired performance, except that with internal benchmarking you’re comparing your current performance to past performances or other teams’ performances.

Once you've identified what’s worked and what hasn’t, you can end the internal review, benchmarking the processes and workflows you’d like to standardize. 

Competitive benchmarking

This is the flip side of an internal review: here you look outward to the results of other companies in your industry. Competitive benchmarking is trickier because it’s harder to find reliable data. You need to rely on your competitors to share information, or get data from a third party that you may not be able to verify.

But it’s worth figuring out. Once you get past the collection hurdles, competitive research is one of the best ways to gain a competitive edge. It helps you spot patterns or themes common to your industry, which you can use for benchmarking and overall process improvement .

For example, let’s say you discover that a direct competitor gets more social media engagement than your accounts. Using this information, you can set a benchmark for your own company’s social media engagement. In short, you’re deciding what social media performance metrics your company should hit to stay competitive in your industry.
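As a rough sketch of how that benchmark might be set, the snippet below turns a competitor’s public numbers into an engagement-rate target. The account figures and the per-follower definition of engagement rate are assumptions for illustration; platforms and tools define the rate differently.

```python
# Hypothetical: derive an engagement-rate benchmark from public numbers.

competitor = {"followers": 52_000, "avg_interactions_per_post": 1_300}
us = {"followers": 18_000, "avg_interactions_per_post": 270}

def engagement_rate(account):
    # average interactions per post relative to audience size
    return account["avg_interactions_per_post"] / account["followers"] * 100

print(f"Competitor benchmark: {engagement_rate(competitor):.1f}%")  # 2.5%
print(f"Our current rate: {engagement_rate(us):.1f}%")              # 1.5%
```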

Strategic benchmarking

Sometimes you know something isn’t working, but you can’t seem to figure out why. Maybe you’re just in a problem-solving rut, or maybe you’re expanding into new markets and developing an entirely new way of working. Strategic benchmarking is a creative way to stretch beyond industry knowledge. For strategic benchmarking, you’re looking for best-in-class performance. Often, this means looking to other companies, industries, or even cultures to see if you can create a new strategic benchmark for your work.

Strategic benchmarking has been used throughout history to foster innovation. For example, when an escalator company moved to shopping malls, it had to solve the problem of helping people to rise quickly and steeply, against gravity. It was unprecedented in their industry, so they looked outwards. In the end, they used techniques from the mining industry to design mall escalators. When done successfully like this, strategic benchmarking can catapult you well beyond the competition. 

The 8 steps in the benchmarking process

By following these eight simple steps, you can use benchmarking to make continuous improvements to your workflows and processes. 

Decide what you’re benchmarking. If you’re new to benchmarks, start by creating benchmarks for the projects, processes, or desired results that have the highest impact on your work.

Decide your benchmarking type. In other words, determine if your data will come from competitive, internal, or strategic benchmarking.

Review and record. Look at what you’re creating the benchmark for. Record all related processes and document related workflows so you have a good idea of where you are now, before you start. 

Collect data. Depending on the type of benchmarking, data could come from competitor research or internal data. When using competitive research, be careful with secondary information on competitors (e.g., from websites or news articles), which can be hard to fact-check.

Analyze data. Measure data against your own performance or work to identify gaps, patterns, and opportunities for improvement.

Make a plan. Data won’t do much on its own. Once you have a full analysis, use project planning to decide how you’re going to set and use these benchmarks.

Implement changes. Now, you can move into the project management stage to fully implement your benchmarks and create a new standard for your work, team, and company.

Rinse and repeat. Benchmarking is an ongoing process, but it’s specific to each new idea or workflow. Restart the process from the beginning for every new project.

Benchmarking sets the gold standard

Benchmarking processes, workflows, and results gives you a baseline for measuring your success. Benchmarks clarify expectations and let your team know how they can produce the best results.  

Benchmarking helps you to find the work standards that push your business, but it won’t help you get that work done. With Asana, you can track, automate, and build out workflows that let you do better work, faster. 


Smart Benchmarking Starts with Knowing Whom to Compare Yourself To

By Raul Valdes-Perez


How to avoid tunnel vision.

Comparing your organization to peers – also known as benchmarking – lets you understand how you’re doing, identify performance gaps and opportunities to improve, and highlight peer achievements that you could emulate, or your own achievements to be celebrated.



Case Study: Best Practice and Benchmark Analysis


Intervention: The client recognized that existing faculty expertise, the ability to serve as a convener, and their geographic location could form a foundation to build upon…but they needed to know how they compared to others already in the space. Members of the leadership team decided to explore the viability of creating a Center of Excellence (CoE) on this topic by completing a fact-based review to determine whether there was a business case to support this hypothesis.

Impact: Wes completed a benchmark analysis that included a review of the current landscape and an evaluation of the characteristics of best-in-class institutions, and measured the client against these criteria to identify areas of strength and gaps that could impact success. The holistic approach created a framework that defined what success would look like in today’s shifting social and political environment. The evaluation also looked at how the proposed options could positively impact attracting students, faculty, and staff, as well as serve as a catalyst to increase donations. This comprehensive assessment demonstrated that the client was well positioned to make valuable contributions to the study of diversity and inclusion. It also provided a high-level implementation plan designed to deliver quick wins as well as achieve sustainable long-term impact.

Developing Growth Strategies Leveraging Market Analysis and Competitive Benchmarking

Ramy Accad

A whiskey brand was embarking on a transformation journey to realize its full potential in a growing market. Although the brand was acquired by a major global spirits conglomerate several years ago, it still had the opportunity to achieve greater commercial and operational success in the market, especially when benchmarked against key competitors. In this market analysis and competitive benchmarking case study, we outline how Clarkston helped a spirits client develop its growth strategy.

The whiskey category is highly competitive, with several major players owning a large portion of market share and new entrants continuing to disrupt the industry. The brand was seeking to understand what the market opportunity for their core spirits product and adjacent product innovation would look like in the future to inform their five-year strategic roadmap, including the potential for significant capex investments and/or asset divestitures in the future.    

Clarkston partnered with the client’s leadership team to perform a financial assessment of current operations and cost structure. Several financial models and scenarios were developed, based on cost-containment and ambitious growth assumptions, illustrating the point at which the company would reach profitability. In doing so, Clarkston pressure-tested historical and projected financials to identify areas of improvement toward gross margin, operating margin, and cash flow targets.


In addition, Clarkston conducted exhaustive research on whiskey and adjacent categories through an in-depth market landscape assessment to identify trends, performance drivers, and growth levers for the brand to explore, and completed competitor benchmarking across various whiskey brands to analyze product mix, pricing, brand positioning, and market share in the U.S.

Furthermore, the team evaluated macro-consumer trends in the whiskey market to better understand potential consumer segment targets, and led consumer focus groups and expert interviews across national distributors and major spirits organizations to provide a strategic perspective on the market opportunity and the considerations in achieving growth goals.

As an outcome, the team delivered a set of strategic recommendations for the client’s executive leadership team in preparation for an upcoming board of directors meeting to finalize the brand’s growth strategy and investment profile for the next five years, including an evaluation of criteria and scenarios that could warrant a carveout.    

Download the Market Analysis and Competitive Benchmarking case study here.

Learn more about our strategy and innovation consulting or our wine, spirits, and beer consulting services by contacting us below.

Contact Us to Learn More


  • Employee Benefits
  • Change Management
  • Talent Acquisition
  • Applicant Tracking Systems

Remove

15 HR Analytics Case Studies with Business Impact

Analytics in HR

NOVEMBER 5, 2018

For this article, I have collected 15 of the best HR analytics case studies I’ve come across in the past two years. Each of these case studies are connected with a concrete business impact. For each case study , I will refer to their original publication. 15 HR Analytics Case Studies .

case study benchmark analysis

Unlocking the Secrets of Effective Marketing: The Ultimate Guide to Assessing Candidate Skills

Professional Alternatives

JULY 10, 2023

Understanding Marketing Skill Benchmarks And Job Qualifications In order to effectively assess candidate skills, it is important for employers to have a clear understanding of marketing skill benchmarks and job qualifications. These include behavioral interviews, case studies , skills assessments, and work samples.

This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.

  • Delivering DEI Training That Drives Real Change
  • AI & DEI: With Great Opportunities Comes Great HR Responsibility
  • Bridging the Gap: The Intersection of DEI Initiatives and Employee Benefits
  • Breaking the Burnout Cycle: Empowering Managers for Excellence
  • From Awareness to Action: An HR Guide to Making Accessibility Accessible

MORE WEBINARS

Trending Sources

  • ClearCompany HRM
  • The People Equation

article thumbnail

Case Study: The Value Of Pay Transparency And How To Implement It

HR Tech Girl

JULY 5, 2023

Here I aim to shed light on what pay transparency looks like at Compt, explain its mechanics and influence on overall compensation structures and raises, present real-world examples of its benefits, and provide practical considerations for organizations contemplating this approach. This will also help avoid resentment across seniority levels.

article thumbnail

Background Screening Impacts the Candidate Experience

HR Bartender

SEPTEMBER 22, 2015

Did you know that HireRight offers an extensive resource library with case studies and checklists? In HireRight’s 2015 Employment Screening Benchmark Report , 51 percent of organizations said that finding and retaining talent was their top business challenge. You can check it out here. Enjoy the post!). So how do we do that?

article thumbnail

Navigating Uncertainty: The Strategic Imperative of Investing in People and HR Tech

FEBRUARY 7, 2024

Case Studies of UAE Businesses Succeeding with HR Tech Investments The dynamic landscape of the UAE business scene demands robust HR strategies to attract, retain, and empower top talent. Bayzat’s success transcends individual companies, setting a benchmark for HR innovation in the UAE.

article thumbnail

Experts Share 4 Tips for People Analytics Success

APRIL 21, 2021

According to Nigel, there is a general sharing that happens in the people analytics community, and tapping into professional networks can help you find more examples of what works in practice. On this front, Nigel recommends asking a few key questions when consuming case study content: 3. Benchmark strategically.

article thumbnail

HR Business Partner Resources Repository

AUGUST 20, 2021

HR Business Partner Benchmarking Report The HR business partner role is evolving. The rise (and fall) of HR analytics: a study into the future applications, value, structure, and system support. Case Study : How we Determined Optimal Staffing Levels. Case study : Key Drivers of Retail Sales Performance.

article thumbnail

People Analytics and HR-Tech Reading List

Littal Shemer

OCTOBER 11, 2022

It will also introduce machine learning and where it fits within the larger HR Analytics framework” Handbook of Regression Modeling in People Analytics: With Examples in R and Python Keith McNulty (2021). It covers key questions: Where to find data in an organization? How to collect and analyze it?

article thumbnail

The Complete Guide on Presenteeism (w. Example Intervention)

Digital HR Tech

JANUARY 29, 2020

To give an example , would you rather have someone be absent and not work, or be present and work less effectively? In this example , there is definitely a case of presenteeism as the employee will be less effective when at work – but it is still better than him or her being fully absent. An example . 8 ( Koopman et al.,

article thumbnail

How To Leverage AI To Enhance Customer Loyalty in 2024

FEBRUARY 16, 2024

For example , users shopping around your website may receive product suggestions matching their preferences and past purchases. Here’s an example of personalized recommendations on Amazon. Spotify's annual Wrapped is a famous example of a personalized loyalty program that uses AI.

article thumbnail

Yield Ratio: All You Need to Know

APRIL 7, 2021

An example of how a recruitment yield pyramid would look in practice can be seen below: You may have heard the term yield ratio in the finance field. For example , you could assess the effectiveness of a particular job board by calculating a yield ratio. The ratio measures movements between each stage (e.g.,

article thumbnail

A Literal THESIS on The P&L Impact of Candidate Experience

MAY 17, 2019

By this point, nearly every talent acquisition leader concerned with candidate experience is familiar with the Virgin Media case study detailing huge potential losses from poor candidate experience. I will note here that Survale clients can automatically benchmark their performance against Talent Board data.

article thumbnail

Employee listening in the Intelligence Age: It’s a new era

HRExecutive

OCTOBER 24, 2023

Pay and benefits Deutsche Telekom used design thinking to tailor executive benefits programs, rather than just benchmarking or assuming to know what benefits would be most useful for their senior leadership. For example , if somebody asked, “What are my colleagues paid?”, 31), where we’ll unpack these insights.

article thumbnail

11 HR Analytics Courses Online

AUGUST 12, 2019

All subjects are illustrated by many real-life examples of HR analytics. R goes further than the traditional tools that are used for HR data benchmarking and analysis, like Microsoft Excel, Access, and SPSS. Examples of machine learning algorithms include decision trees, Bayes, simple rules, clustering, and meta-classifiers.

article thumbnail

How Data Cleansing Can Streamline Your HR Analytics

For example , imagine a scenario where an HR team is analyzing employee turnover rates. This enables them to track progress, set benchmarks , and measure the impact of HR initiatives accurately. Case Study of Successful Data Cleansing in HR Real-world examples demonstrate the power of data cleanup in HR analytics.

article thumbnail

Services That Help with Incentive Plan Design

The Incentive Solutions News blog

SEPTEMBER 21, 2021

Here are some of the services that can help you plan a superior incentive program: Goal-Setting & Benchmarks . Do you need to achieve a 50% increase in warranty registration submissions by the end of Q2, for example ? What benchmarks should the program hit to be on-track for success? Incentive Marketing Services.

article thumbnail

New Ebook: How to Build a Loyalty Program for Distributors

MARCH 22, 2021

Our eBooks, FAQ’s and case studies are packed with information about deploying an incentive program that best suits your needs. The guide’s suggestions are supported by research, studies , and examples of successful loyalty programs run by real-life manufacturers. Get the latest in industry news and insights.

article thumbnail

Hotel Giant IHG gained a 97% positive applicant experience using predictive people analytics – Can these along with other benefits be easily achieved by other companies?

FEBRUARY 14, 2019

He uses added insight from our recent case study with hotel giant IHG who recently applied AI in HR with Cognisess. . In your opinion, what element of the IHG case study was particularly successful? “On This avoided the need for applicants to undertake the usual case study module on day 2 of the assessment centre.

article thumbnail

Strategic Workforce Planning 101: Framework & Process

DECEMBER 8, 2023

Benefits of strategic workforce planning Strategic workforce planning framework Strategic workforce planning case studies Strategic workforce planning process Strategic workforce planning tools Best practices for strategic workforce planning FAQ What is strategic workforce planning? An example of such an indicator is new product leads.

article thumbnail

New Ebook: Why Use Incentive Services?

JULY 21, 2021

Our eBooks, FAQ’s and case studies are packed with information about deploying an incentive program that best suits your needs. For example : Goal-setting assistance. Benchmark data that can help assess a program’s successfulness. Get the latest in industry news and insights.

article thumbnail

What is Time to Fill? Everything You Need to Know About This Recruiting Metric

MAY 18, 2021

As an example , let’s say you are replacing a senior economist. For example , some roles are filled faster with an agency, whereas other roles are best filled with a LinkedIn job ad. You can also make further calculations to measure the effectiveness of your recruitment process on various benchmarks .

article thumbnail

The Essential Plan Every Manager Should Follow: Balancing Employee Growth and Achieving Operational Excellence

SEPTEMBER 18, 2023

From the manager’s core responsibilities to strategic approaches , case studies , and the multitude of benefits and challenges, we’ll unveil how to make OKRs shine bright as a guiding light in this critical journey. This inspires and sets a benchmark for their team to focus their efforts towards achieving company objectives.

article thumbnail

Be careful! These books can change your career: People Analytics and HR-Tech reading list

MAY 14, 2018

The book covers the full People Analytics scope (Benefits, Compensation, Culture, Diversity & Inclusion, Engagement, Leadership, Learning & Development, Personality Traits, Performance Management, Recruitment, Sales Incentives) with numerous real-world examples , and shows how R can help”. Ben Eubanks (2018). Bernard Marr (2018). “A

article thumbnail

Reducing Absenteeism in a Mid-Sized Organization: A Case Study

JULY 6, 2020

“ I will walk you through the steps we took: Establish a specific benchmark ; Gather theories and data for analysis ; Iteratively run the analyses ; Collectively interpret the results and decide on targeted action. Establish a specific benchmark . In fact, their absenteeism rates were doubled compared to the benchmark !

article thumbnail

360 Feedback Software: How to Maximize Results

JUNE 29, 2023

Real-World Case Studies Understanding 360 Feedback Software The traditional process of gathering 360 feedback requires a lot of legwork. Download Now: Free 360 Reviews Template [Get Your Copy] Good vs. Bad 360 Review Questions Here are a few examples of bad vs. good questions to ask. Understanding 360 Feedback Software 2.


What is HR Analytics?

JUNE 24, 2023

This type of analytics is commonly used for reporting, benchmarking, and monitoring HR metrics. HR chatbots, sentiment analysis, and predictive modeling are just a few examples of how AI and ML are transforming HR analytics. Case Study 2: Enhancing Employee Engagement, XYZ Inc.


6 Strategies for Leading Remote Teams in SaaS Companies [2024]

DECEMBER 7, 2023

Let’s take a look at some ways of leading remote teams and case studies of where they are applied. Toptal is a good example of how remote-friendly teams establish clear goals and then create a collaborative environment with tools like Zoom and Slack. Build benchmarks and ways to achieve those benchmarks.


New series offers guidance on talent acquisition

FEBRUARY 6, 2020

They also feature case studies showcasing real-world examples to put into action, according to Sue Marks, CEO and founder of Cielo. In addition, the availability of benchmarking data facilitates more accurate goal setting.


Create An Irresistible Work Culture, Not A 'Best Place To Work'

Human Workplaces

JUNE 9, 2019

The standard within these contests is either employee satisfaction or comparison to a third-party benchmark, but both of these methods could be completely missing the mark on what makes your particular organization deeply successful. For example, many would argue that a flexible workplace is a component of a best place to work.


The Power of Partnership, Navigating the Maze of HR Software Providers

FEBRUARY 12, 2019

Two trends stand out: a high degree of fragmentation (hundreds of vendors offer recruiting solutions, for example) and the rapid introduction of new concepts – artificial intelligence, machine learning, IoT, employee social networking, and the gig economy.


How To Build a Learning Culture: You Asked, We Answered

JULY 28, 2021

Degreed: Creating a learning culture starts with the culture itself and the example set by leadership. To help late adopters, we created several assets that illustrate the benefits of switching to a new way of learning, such as video success stories, written case studies, and participant testimonials.


Upskilling and reskilling the workforce for an uncertain future

JANUARY 5, 2023

Technical skills are a good example here, as even non-technical roles require increasing levels of… So good project managers can move across departments, for example. And in some cases, rapid reskilling of employees will help to avoid redundancy in the longer term. Case study: Jardine Motors Group.


Cultural Sensitivity Training: The Bridge Between Employee Development and Inclusive Productivity

SEPTEMBER 20, 2023

Promote Diversity in Leadership: Strive for diverse representation in leadership positions to set an example and influence company culture positively. Measure and Track Progress: Establish metrics and benchmarks to track progress in fostering inclusivity, regularly assess your initiatives’ effectiveness, and make adjustments as needed.


AI Recruiting Tools May Be the Future, But Proceed With Caution

APRIL 28, 2023

The following case studies (alongside some AI tools that are helping companies like Enspira HR employ diverse hiring practices) paint the whole picture of the who, what, when, where, and why of AI recruitment tools, and how companies can ensure AI boosts both efficiency and equitable hiring decisions. The problem?


How to Convert a Disengaged Employee Into An Engaged One

The action you take will depend on the feedback you receive, but here are some examples of ways you might tackle engagement challenges. Frequently monitor engagement: how often do you monitor employee engagement? For example, if an employee needs more role definition and clarity, take the time to help them review their work priorities.


Quadruple Your B2B Lead Generation Results in 2021

JANUARY 26, 2021

Cook up a tasty case study. A case study is every salesperson’s go-to collateral. Case studies build trust and act as social proof for how your product will work and impact a future lead’s buying decision. How do you create case studies for B2B lead generation? Determine industry benchmarks.


TIAA's Journey of Crafting the Right Performance Management Solution For Its Culture

APRIL 26, 2016

The problem is that as we search for the perfect performance management solution, we as human resources professionals and social scientists are swimming in research, benchmarking data, and case studies that point us in completely contradictory directions.


The Business Case for Human Resources

OCTOBER 27, 2016

They can help develop effective managers who increase revenue, understand how to keep essential people on board, and carry useful learnings from one department to another, just to name a few examples. Another lens for examining metrics is to consider industry benchmarks and the activities of other peer organizations.

The ROI on L&D: How Tenaris Saw a Return of 119% After One Year of Degreed

OCTOBER 29, 2019

That is exactly what Tenaris was able to do after one year of using Degreed. They started by benchmarking their current strategy, developed a new and modernized platform, built a seamless implementation plan, and were able to quantify their results.


57 Stellar Ways to Reward Employees for Good Performance & Acknowledging Employee Dedication

SEPTEMBER 27, 2023

These bonuses are typically tied to specific performance metrics or achievements, such as meeting sales targets, exceeding project goals, or surpassing customer satisfaction benchmarks. For example: Google is known for its generous performance bonuses. These bonuses can be one-time payments or part of a structured incentive program.

Bet Big on These HR Tech Conference 2017 Sessions

SEPTEMBER 25, 2017

Elaine Orler of Talent Function Group will share important talent acquisition trends, valuable solutions, and real-world examples of how inventive organizations are recruiting and hiring today – and will be in the future. Hiring the right talent is key to succeeding in business.

Benchmarking

Published: 02 Dec 2015 | Updated: 18 May 2023

Contents:

  • Benchmarking in practice
  • Examples of benchmarking
  • Business process benchmarking
  • Pitfalls to be avoided
  • Bibliography
  • Useful links
  • Further reading

Introduction

The following BPM tool guide is one of a series produced for ICAEW by Professor Mike Bourne of Cranfield University.

Benchmarking is the process of comparing performance between different entities. You may be benchmarking your sales margins and cost base against your competitors’, or benchmarking the grades achieved by pupils at the different schools you are considering for your children’s future education. But making performance comparisons is only one approach to benchmarking. The other main approach is called Business Process Benchmarking, where you compare the processes for achieving the results as well as the results themselves.

In the next two sections I will talk about benchmarking in general. In the following section I will discuss Business Process Benchmarking.

Benchmarking in practice

Benchmarking can be an expensive exercise, but remember that these days many organisations are already doing it for you. Which?, J D Power and TripAdvisor are at the information end, while GoCompare, Money Supermarket and Compare the Market are increasingly being used in the buying process. If you are not using these services to make comparisons, I can assure you that your customers are.

If you are serious, you may have your own internal benchmarking department. For example, ICI created an internal consulting team to benchmark the performance of their chemical plants against each other. Just making the comparisons raised questions and kick-started the performance improvement process.

Examples of benchmarking

In the list below I have given some examples of benchmarking; a short worked sketch of a few of these ratios follows the service benchmarking list. However, your trade association may have more specific and relevant benchmarks for you to use.

Financial benchmarking – for example:

  • PE ratios against industry sector
  • Dividend cover

Financial performance – for example:

  • Return on capital employed
  • Sales margin
  • Marketing spend as a percentage of sales
  • R&D spend as a percentage of sales
  • Turnover per employee
  • Value Added per employee
  • Sales per employee
  • Profit per employee

Service benchmarking – for example:

  • HR costs as a percentage of sales
  • Financial and accounting costs as a percentage of sales
  • HR function FTEs/Total FTEs employed
  • Finance and accounting function FTEs/Total FTEs employed
  • HQ function FTEs/Total FTEs employed
  • Accounting costs per transaction
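
To make these ratios concrete, here is a minimal sketch in Python that computes a handful of them and compares each against a benchmark figure. All company and industry numbers below are invented for illustration; substitute your own accounts and your trade association’s benchmarks.

```python
# Minimal sketch of financial and service benchmarking ratios.
# All company and industry figures below are illustrative, not real data.

def pct(numerator: float, denominator: float) -> float:
    """Express numerator as a percentage of denominator."""
    return 100.0 * numerator / denominator

sales = 12_000_000            # annual turnover
operating_profit = 1_500_000
capital_employed = 9_000_000
marketing_spend = 600_000
hr_costs = 240_000
employees = 150

metrics = {
    "Return on capital employed (%)": pct(operating_profit, capital_employed),
    "Sales margin (%)":               pct(operating_profit, sales),
    "Marketing spend as % of sales":  pct(marketing_spend, sales),
    "HR costs as % of sales":         pct(hr_costs, sales),
    "Turnover per employee":          sales / employees,
}

# Hypothetical industry medians to benchmark against.
benchmarks = {
    "Return on capital employed (%)": 15.0,
    "Sales margin (%)":               10.0,
    "Marketing spend as % of sales":  4.0,
    "HR costs as % of sales":         1.5,
    "Turnover per employee":          95_000.0,
}

for name, value in metrics.items():
    print(f"{name}: {value:,.1f} (benchmark {benchmarks[name]:,.1f})")
```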

Product benchmarking – comparing the different elements of your product’s performance against the competition in the marketplace. You can do this yourself, or use industry standards such as those delivered by Which?, The Good Housekeeping Institute or J D Power.

Customer service benchmarking – again, you can do this yourself through mystery shopping, or you can use the results from services such as TripAdvisor.

Customer satisfaction benchmarking – achieved by surveying your customers and your competitors’ customers.

Brand benchmarking – companies such as Superbrands UK provide this.

Employee satisfaction benchmarking – there are consortia that will compare your employee satisfaction survey results with the rest of your industry or with employees at large.

Employer brand – how attractive is your company to potential new employees?

Business process benchmarking

Business Process Benchmarking was popularised by Robert Camp in the late 1980s after his work at Xerox. He defined it as:

"The continuous process of measuring products, services and practices against the company’s toughest competitors or those companies renowned as industry leaders."

Other companies have taken this approach but describe Business Process Benchmarking slightly differently. Ford described it as “a structured approach for learning from others and applying that knowledge”; 3M described it as “a tool to search for enablers that allow a company to perform at a best-in-class level in business processes”.

In Business Process Benchmarking, the idea is to compare your process against the process others are using, so you understand both the level of performance being achieved and how it is achieved. An approach could take the following five steps; a small gap-analysis sketch for the Analysis step follows them.

Step 1 Pre-project planning

  • Obtain project sponsorship
  • Put the project in context and give it meaning
  • Determine what is to be achieved (the outcomes)
  • Manage expectations

Step 2 Planning

  • Decide what is to be benchmarked
  • Build the team
  • Formalise the goals
  • Identify potential benchmarking partners (the companies you would like to go and see)

Step 3 Analysis

  • Establish current levels of performance
  • Map the existing process
  • Benchmark by mapping competitor processes and comparing performance
  • Identify the potential gap

Step 4 Integration

  • Add the context and learning into your thinking
  • Project future performance to be achieved
  • Communicate your findings
  • Establish the commitment to action your findings

Step 5 Action

  • Establish the specific changes to be made
  • Plan the implementation
  • Monitor progress, plans and KPIs
  • Recalibrate your benchmark performance
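
As promised above, here is a minimal sketch of the gap analysis in the Analysis step, assuming hypothetical process metrics for your own operation and for a benchmarking partner. The metric names and figures are illustrative only.

```python
# Minimal gap-analysis sketch for the Analysis step (Step 3).
# Process metrics and figures are hypothetical.

own = {"orders_picked_per_hour": 38.0, "picking_error_rate_pct": 1.8}
partner = {"orders_picked_per_hour": 55.0, "picking_error_rate_pct": 0.4}

# Metrics for which a lower value is better.
lower_is_better = {"picking_error_rate_pct"}

for metric, ours in own.items():
    theirs = partner[metric]
    if metric in lower_is_better:
        behind = ours > theirs
    else:
        behind = ours < theirs
    status = "behind partner" if behind else "ahead of partner"
    print(f"{metric}: us {ours} vs partner {theirs} -> {status} (gap {theirs - ours:+.1f})")
```

The output of a sketch like this feeds directly into the Integration step: each "behind partner" line is a candidate gap to project forward and act on.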

The choice of who to benchmark against is important. You can:

  • Internal benchmarking – compare the different operations within your own business.
  • Industry benchmarking – go and look at a good operator in your industry. Getting access may be difficult unless they don’t compete with you directly, for reasons of distance etc. This type of benchmarking will give you the quickest results and insights, but may not give you breakthrough ideas.
  • Best-in-breed benchmarking – look at who would be the best in the world at this aspect of the business because it is absolutely core to their operation. The example here was Xerox, who thought about order picking as a process and benchmarked a camping gear manufacturer: somebody who does picking very well but is not in their industry at all. These visits can give you the best ideas if you are open-minded about what you are looking at.

There are three compelling reasons to undertake Business Process Benchmarking:

  • The results will challenge accepted norms
  • It drives improvement by forcing you to look at best in class
  • It shows how the performance can be delivered, so it helps to overcome resistance to change.

Pitfalls to be avoided

External comparisons are always useful.

People in your business invariably think they perform better than they actually do, so an external comparison can dispel the myths.

If you carry out Business Process Benchmarking, you can actually see how the better performance is achieved.

You need to pick tough competitors, not people you know aren’t as good as you are.

You need to make comparisons carefully. If you compete on your service levels, don’t be surprised if some competitors have lower costs. Also make sure that the comparisons are structurally the same: who owns the assets? Are they on the balance sheet, or are they hidden elsewhere?

Start with processes that are easy to benchmark to get some quick wins under your belt, but once you have done this, benchmark your core processes. It is good to know that your accounting costs are competitive, but in reality there are much more important customer-facing processes that need to be got right first.

Bibliography

  • Robert Camp (2006), Benchmarking: The Search for Industry Best Practice that Leads to Superior Performance, ASQC Quality Press, Milwaukee, USA.

Useful links

  • Finance & Management Faculty special report
  • Employee Engagement
  • Super Brands - An insight into Britain's strongest brands
  • NHS Benchmarking network
  • Local Government Benchmarking Framework

Further reading

The ICAEW Library & Information Service provides full-text access to leading business, finance and management journals and eBooks. Further reading on benchmarking is available through the resources below.

  • Dave Harutian, Strategic Finance, 01 Jan 2023
  • C. Brooke Dobni and Mark Klassen, Journal of Business Strategy, 04 Jan 2021
  • Russ Banham, 01 Jan 2017
  • Brian H. Kleiner and Deven Shah, Industrial Management, 01 Mar 2011

Terms of use: You are permitted to access, download, copy, or print out content from eBooks for your own research or study only, subject to the terms of use set by our suppliers and any restrictions imposed by individual publishers. Please see individual supplier pages for full terms of use.



Conducting a UX benchmarking study step by step


UX benchmarking is the process of evaluating the performance of a company or website against criteria accepted by stakeholders or against universal standards. The process is usually generalized as UX benchmarking research, a type of summative evaluation that analyzes relative performance within a company to measure its success.


If you’re here, you’re probably looking for help conducting your own study. I will explain UX benchmarking in detail so you understand when a company needs it, and break this complex process down into the steps required to run it successfully.

Table of contents

  • Benchmarking research
  • What does benchmarking mean in a UX context?
  • When is benchmarking used in UX?
  • Step 0: Necessary UX data to start benchmarking
  • Step 1: Identify behavioral and attitudinal issues using a UX audit
  • Step 2: Define standards you will look up to
  • Step 3: Analyze comparative results and propose solutions
  • Step 4: Implement changes
  • Step 5: Gather post-change results on the UX metrics of interest
  • Step 6: Visualize and report the findings
  • Step 7: Record lessons learned
  • Step 8: Restart the cycle

Benchmarking research

Benchmarking research focuses on these points:

  • Applying specific data you gather from a comparative analysis to the standard
  • This data’s usability for your specific website and product
  • Its relevance considering the constantly changing environment
  • Your target audience
  • Your technical abilities to deliver what is needed based on this data
  • Your company’s goals

Generally, benchmarking focuses on comparative evaluation against reasonable standards. I’ve used the word “reasonable” on purpose, because the company might deal with data that has no universal standards but can identify patterns and trends within the industry to use as a guide for evaluation.

UX benchmarking research is a complex task that you should address from various perspectives.

What does benchmarking mean in a UX context?

In the UX context, we evaluate the company’s or product’s performance based on qualitative and quantitative data. Because qualitative data is subjective, we need to investigate any behavioral patterns we observe with the company’s end goals in mind. We do this to conclude whether the company, or a change within it, has succeeded or failed.

When is benchmarking used in UX?

Benchmarking is necessary when:

  • Return on investment (ROI) is below the stakeholders’ target
  • UX metrics are indicating a poor user experience
  • You are measuring the company’s or UX team’s performance after specific changes to the product
  • Market shifts changed the nature of competition
  • A new paradigm appeared in industry standards due to innovations, law changes, average industry performance, etc.

In other words, if the company wants to display its strength or understand its weaknesses, opportunities, and threats, it should conduct UX benchmarking.

Benchmarking is always a part of a bigger process of performance optimization and improvement. Businesses operate in a volatile environment, with innovations and new competitors continuously appearing on the market; therefore, a developing company will always have an active team of UX specialists who work full-time to update the company’s design and keep track of customers’ shifting behavioral patterns.

UX benchmarking study: A step-by-step guide

The complexity of the UX benchmarking process should not discourage you from conducting it on a regular basis. The larger the company, the harder it is to spot weak points, because more variables are involved in its performance.

However, the first step toward better UX performance relies on extensive background research on the qualitative and quantitative variables involved.

Step 0: Necessary UX data to start benchmarking

For a successful start, the UX team should identify what they are dealing with by gathering all the relevant UX data available for your company (a sketch computing a few of these metrics follows the list). You can break this data down into quantitative and qualitative:

  • Time-on-task — average time of the user’s involvement with the website, completing the particular task
  • Conversion rates — the percentage of conversions compared to the total number of website visitors
  • Error rate — the share of interactions that fail because users are misled or hit technical barriers
  • Completion rate — the percentage of tasks users complete out of the total number of attempts
  • Engagement — set of actions made by the user on a site (views, clicks, shares, etc.)
  • Retention — the percentage of returning users after the first interaction with the company
  • Customer satisfaction score (CSAT) — a loyalty metric based on the level of user satisfaction
  • System usability scale (SUS) — customer’s evaluation of the usability of the website
  • Net promoter score (NPS) — users’ readiness to refer others to share the experience they got from the company
  • Customer effort score (CES) — measures the amount of effort users had to put in while interacting with the product
  • Any other behavioral data that is not measured in numbers but can be used in the UX context
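
As referenced above, several of these quantitative metrics reduce to simple formulas. The sketch below shows how a few of them can be computed from raw counts and survey scores; all inputs are invented for illustration.

```python
# Minimal sketches of a few quantitative UX metrics from the list above.
# All counts and survey scores are invented for illustration.

def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversions as a percentage of total visitors."""
    return 100.0 * conversions / visitors

def completion_rate(completed: int, attempts: int) -> float:
    """Completed tasks as a percentage of total attempts."""
    return 100.0 * completed / attempts

def retention_rate(returning: int, first_time: int) -> float:
    """Returning users as a percentage of first-time users."""
    return 100.0 * returning / first_time

def nps(scores: list[int]) -> float:
    """Net promoter score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(conversion_rate(conversions=240, visitors=12_000))   # 2.0
print(completion_rate(completed=850, attempts=1_000))      # 85.0
print(retention_rate(returning=300, first_time=1_200))     # 25.0
print(nps([10, 9, 9, 8, 7, 6, 3, 10]))                     # 25.0
```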

Gathering all the data is generally called a UX audit and is not limited to the variables I’ve mentioned. All of the measurements are a part of understanding the bigger picture: what the audience experiences when they interact with the website. You can further interpret the raw data to spot issues and threats and create opportunities for growth by targeting the weak spots.

How can professionals gather data?

Professionals usually measure the quantitative data I’ve mentioned using tools like Google Analytics or LogRocket. Such tools provide statistics on almost every aspect of user behavior on the website, as well as a behavioral roadmap, meaning they display the exact movement of customers through the site.


You should support some of this data further with statistics on the company’s sales: the total number of clients; the percentage of clients ordering compared to all users reaching the payment page; the number of customers ordering multiple times compared to one-time users; and so on.


The only weakness of such data is that it does not show the reason why customers leave specific pages, spend less time on different pages, do not convert, and so on. Answering those questions is the task for UX and UI teams as they analyze the correlation between the numbers they’ve acquired.

The qualitative data mentioned above is always measured by directly contacting the clients using questionnaires or user interviews . While CSAT, SUS, NPS, and CES are usually measured with questionnaire forms, interviews are meant to gather some behavioral data that is deeper and more detailed in explanation. For example, customer feedback forms can clarify if the users like the service or not and give it a score from 1 to 5. Interviews, in such cases, will help you understand exactly why they like the service or not and see at which step of the customer journey users could have an unsatisfactory experience.
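
For the questionnaire-based metrics, the scoring rules are well defined. The sketch below shows the standard SUS scoring formula (odd-numbered items contribute score minus 1, even-numbered items contribute 5 minus score, and the total is scaled by 2.5 to a 0-100 range) and a simple CSAT calculation; the responses are invented for illustration.

```python
# Scoring sketch for two questionnaire metrics mentioned above.
# Responses are invented for illustration.

def sus_score(responses: list[int]) -> float:
    """System Usability Scale: 10 items rated 1-5.
    Odd-numbered items contribute (score - 1); even-numbered items
    contribute (5 - score); the sum is scaled by 2.5 to 0-100."""
    assert len(responses) == 10, "SUS uses exactly 10 items"
    total = sum(
        (score - 1) if item % 2 == 1 else (5 - score)
        for item, score in enumerate(responses, start=1)
    )
    return total * 2.5

def csat(scores: list[int]) -> float:
    """CSAT: percentage of respondents rating 4 or 5 on a 1-5 scale."""
    return 100.0 * sum(1 for s in scores if s >= 4) / len(scores)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))   # 82.5
print(round(csat([5, 4, 3, 4, 2, 5]), 1))          # 66.7
```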

Step 1: Identify behavioral and attitudinal issues using a UX audit

Using the data gathered during the UX audit mentioned above, UX professionals should work on finding patterns that explain the customers’ behavior and discovering ways to improve their experience. Possible behavioral and attitudinal patterns that may become a threat to the company’s performance can be summed up as follows:

  • Technical problems within the website
  • Quantitative results (conversions, satisfaction rate, traffic, retention) that fall below benchmarking standards and call for deeper analysis
  • Outdated materials
  • Problems with accessibility and usability
  • Barriers revealed by mapping the customers’ actions that might be impacting their choices
  • Concerns and weaknesses identified through qualitative data gathered in questionnaires and interviews

Hence, the results of the UX audit may be more specific but usually fall under some of the categories identified. Your UX team should outline all your findings before taking your next steps. Such an outline serves as a basis for the next benchmarking step.

Step 2: Define standards you will look up to

By definition, the benchmarking process relies on a comparison to predefined standards. However, there might not be universal standards that everyone at your company follows, which means the company has to find reliable standards on its own. There are four types of standards you can use for UX benchmarking (a comparison sketch follows the list):

  • Product performance data — qualitative and quantitative UX metrics that the company had before launching the new product or implementing changes. For example, before the latest changes to the product, the satisfaction rate was 3.5/5, while after the update, the rate increased to 3.75/5
  • Competitors’ performance — available industry best practices and their results. For example, your current conversion rate is 1 percent, while your main competitor has 5 percent
  • Company’s goals — standards set by stakeholders. For example, your average retention rate is 10 percent, while investors want to achieve 20 percent
  • Industry performance — average rates between a set number of competitors. For example, your SUS rate is 2.7/5, while the average rate among the ten main competitors is 4.1/5
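
As referenced above, here is a minimal sketch that compares current metrics against all four kinds of standards and flags where performance falls short. Every figure is illustrative.

```python
# Minimal sketch comparing current UX metrics against the four kinds of
# standards described above. Every figure is illustrative.

current = {"conversion_pct": 1.0, "retention_pct": 10.0, "sus": 2.7}

standards = {
    "previous product version": {"conversion_pct": 0.8, "retention_pct": 9.0,  "sus": 2.5},
    "main competitor":          {"conversion_pct": 5.0, "retention_pct": 18.0, "sus": 4.0},
    "stakeholder goal":         {"conversion_pct": 3.0, "retention_pct": 20.0, "sus": 4.0},
    "industry average":         {"conversion_pct": 2.5, "retention_pct": 15.0, "sus": 4.1},
}

for standard, targets in standards.items():
    for metric, target in targets.items():
        gap = current[metric] - target
        verdict = "meets" if gap >= 0 else "below"
        print(f"{metric} vs {standard}: {current[metric]} vs {target} ({verdict}, gap {gap:+.1f})")
```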

Step 3: Analyze comparative results and propose solutions

After identifying the standards, the next step is to compare your current UX metrics and find a solution to improve your performance. Depending on the standard, try asking yourself these questions:

Against your own product's past performance:

  • Are there technical issues that lead to problems?
  • Did the change in the product lead to problems?
  • Did the product change solve the issue it was meant to tackle?
  • What can be done to fix underperformance, or to perform even better?

Against competitors' performance:

  • What do they do better, and how?
  • How is their input data different (for example, their target audience)?
  • What can be done to reach their results, or to widen the gap if your results are better?

Against company goals:

  • What is our target, and why?
  • Is it achievable based on competitors' performance?
  • Will the changes lead to higher returns?
  • What can be done to satisfy stakeholders?

Against industry performance:

  • Is the data acquired reliable?
  • What patterns can be found in the performance of others that led to better results?
  • What can be done to enhance your website?

Understanding why others perform better or worse than your business is key to knowing what to do next. If the standards sit below your metrics, work on widening the gap and keep raising the bar. If others are doing better, analyze why, identify the weaknesses to address, and look for opportunities to outperform them.

Keep testing new things: the market shifts quickly, and staying current is essential to becoming, or remaining, the leader in your industry.

If benchmarking is done correctly, you should see results improve: your proposed solutions will benefit the company after implementation, provided the design team carried out the cause-and-effect analysis and user testing properly.

To confirm whether the actions taken worked, conduct a post-change UX audit and measure how results have shifted. After implementation, the design team checks the primary UX metrics, tracks whether they have improved, and compares them against the benchmarking standards.
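
One hedged way to measure those shifts rather than eyeball them is a simple significance check. The sketch below runs a hand-computed two-proportion z-test on before/after conversion counts; the visit and conversion counts are invented for illustration.

```python
# Minimal sketch of a post-change check: did the conversion rate move
# beyond what chance would explain? Two-proportion z-test computed by
# hand; counts are made-up illustrations.
import math

before_conv, before_visits = 120, 12000   # 1.00% conversion
after_conv,  after_visits  = 165, 12500   # 1.32% conversion

p1, p2 = before_conv / before_visits, after_conv / after_visits
p_pool = (before_conv + after_conv) / (before_visits + after_visits)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / before_visits + 1 / after_visits))
z = (p2 - p1) / se

print(f"before {p1:.2%}, after {p2:.2%}, z = {z:.2f}")
# |z| > 1.96 suggests a real shift at the usual 5% significance level
```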

Benchmarking research serves a particular purpose, so the data you acquire has to be structured and analyzed further. Visualizing it with charts and infographics makes it far easier for stakeholders to digest, which matters most when your standards are based on stakeholder expectations. The final report should contain the following (a plotting sketch follows the list):

  • Achieved vs. previous results vs. set standards
  • Evaluation of the underperforming results (if there are any) based on post-implementation data
  • Additional adjustments to achieve better results in the future
  • References and supporting data that stakeholders can check to verify the presented results
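
As a small illustration of the visualization step, the following Python sketch plots achieved vs. previous results vs. the set standard for a few metrics using matplotlib. The metric names and values are placeholders.

```python
# Minimal sketch of the "achieved vs. previous vs. standard" view for
# a stakeholder report. Metric names and values are illustrative.
import matplotlib.pyplot as plt
import numpy as np

metrics = ["Satisfaction (1-5)", "Conversion (%)", "Retention (%)"]
previous = [3.5, 1.0, 10.0]
achieved = [3.75, 1.3, 12.0]
standard = [4.0, 5.0, 20.0]

x = np.arange(len(metrics))
width = 0.25
plt.bar(x - width, previous, width, label="Previous")
plt.bar(x, achieved, width, label="Achieved")
plt.bar(x + width, standard, width, label="Standard")
plt.xticks(x, metrics)
plt.ylabel("Value")
plt.title("Benchmarking results vs. standards")
plt.legend()
plt.tight_layout()
plt.savefig("benchmark_report.png")
```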


Any outcome based on proper benchmarking is a valuable lesson. Analyzing your cases and saving them for future reference is key to successful performance.

If a change did not provide the desired results, the case might prevent similar mistakes in the future. If the project is successful, the company may elaborate on the findings and investigate possible ways to benefit even more.

Benchmarking is not only about measuring success: negative outcomes can point to opportunities in other areas and prevent future overspending on failing projects.

There is always room to improve. Customer profiles and user behavior change over time, so it is important to keep targeting evolving needs. The UX design team should continually work on new solutions to increase user satisfaction, react to customer feedback, and track behavior so the company can respond as the data shifts.


Open access | Published: 19 February 2024

Sustaining the collaborative chronic care model in outpatient mental health: a matrixed multiple case study

Bo Kim, Jennifer L. Sullivan, Madisen E. Brown, Samantha L. Connolly, Elizabeth G. Spitzer, Hannah M. Bailey, Lauren M. Sippel, Kendra Weaver & Christopher J. Miller

Implementation Science, volume 19, article number 16 (2024)


Abstract

Background

Sustaining evidence-based practices (EBPs) is crucial to ensuring care quality and addressing health disparities. Approaches to identifying factors related to sustainability are critically needed. One such approach is Matrixed Multiple Case Study (MMCS), which identifies factors and their combinations that influence implementation. We applied MMCS to identify factors related to the sustainability of the evidence-based Collaborative Chronic Care Model (CCM) at nine Department of Veterans Affairs (VA) outpatient mental health clinics, 3–4 years after implementation support had concluded.

Methods

We conducted a directed content analysis of 30 provider interviews, using 6 CCM elements and 4 Integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) domains as codes. Based on CCM code summaries, we designated each site as high/medium/low sustainability. We used i-PARIHS code summaries to identify relevant factors for each site, the extent of their presence, and the type of influence they had on sustainability (enabling/neutral/hindering/unclear). We organized these data into a sortable matrix and assessed sustainability-related cross-site trends.

Results

CCM sustainability status was distributed among the sites, with three sites each being high, medium, and low. Twenty-five factors were identified from the i-PARIHS code summaries, of which 3 exhibited strong trends by sustainability status (relevant i-PARIHS domain in square brackets): “Collaborativeness/Teamwork [Recipients],” “Staff/Leadership turnover [Recipients],” and “Having a consistent/strong internal facilitator [Facilitation]” during and after active implementation. At most high-sustainability sites only, (i) “Having a knowledgeable/helpful external facilitator [Facilitation]” was variably present and enabled sustainability when present, while (ii) “Clarity about what CCM comprises [Innovation],” “Interdisciplinary coordination [Recipients],” and “Adequate clinic space for CCM team members [Context]” were somewhat or less present with mixed influences on sustainability.

Conclusions

MMCS revealed that CCM sustainability in VA outpatient mental health clinics may be related most strongly to provider collaboration, knowledge retention during staff/leadership transitions, and availability of skilled internal facilitators. These findings have informed a subsequent CCM implementation trial that prospectively examines whether enhancing the above-mentioned factors within implementation facilitation improves sustainability. MMCS is a systematic approach to multi-site examination that can be used to investigate sustainability-related factors applicable to other EBPs and across multiple contexts.


Contributions to the literature

We examined the ways in which the sustainability of the evidence-based Collaborative Chronic Care Model differed across nine outpatient mental health clinics where it was implemented.

This work demonstrates a unique application of the Matrixed Multiple Case Study (MMCS) method, originally developed to identify factors and their combinations that influence implementation, to investigate the long-term sustainability of a previously implemented evidence-based practice.

Contextual influences on sustainability identified through this work, as well as the systematic approach to multi-site examination offered by MMCS, can inform future efforts to sustainably implement and methodically evaluate an evidence-based practice’s uptake and continued use in routine care.

Background

The sustainability of evidence-based practices (EBPs) over time is crucial to maximize the public health impact of EBPs implemented into routine care. Implementation evaluators focus on sustainability as a central implementation outcome, and funders of implementation efforts seek sustained long-term returns on their investment. Furthermore, practitioners and leadership at implementation sites face the task of sustaining an EBP’s usage even after implementation funding, support, and associated evaluation efforts conclude. The circumstances and influences contributing to EBP sustainability are therefore of high interest to the field of implementation science.

Sustainability depends on the specific EBP being implemented, the individuals undergoing the implementation, the contexts in which the implementation takes place, and the facilitation of (i.e., support for) the implementation. Hence, universal conditions that invariably lead to sustainability are challenging to establish. Even if a set of conditions could be identified as being associated with high sustainability “on average,” its usefulness is questionable when most real-world implementation contexts may deviate from “average” on key implementation-relevant metrics.

Thus, when seeking a better understanding of EBP sustainability, there is a critical need for methods that examine the ways in which sustainability varies in diverse contexts. One such method is Matrixed Multiple Case Study (MMCS) [ 1 ], which is beginning to be applied in implementation research to identify factors related to implementation [ 2 , 3 , 4 , 5 ]. MMCS capitalizes on the many contextual variations and heterogeneous outcomes that are expected when an EBP is implemented across multiple sites. Specifically, MMCS provides a formalized sequence of steps for cross-site analysis by arranging data into an array of matrices, which are sorted and filtered to test for expected factors and identify less expected factors influencing an implementation outcome of interest.

Although the MMCS represents a promising method for systematically exploring the “black box” of the ways in which implementation is more or less successful, it has not yet been applied to investigate the long-term sustainability of implemented EBPs. Therefore, we applied MMCS to identify factors related to the sustainability of the evidence-based Collaborative Chronic Care Model (CCM), previously implemented using implementation facilitation [ 6 , 7 , 8 ], at nine VA medical centers’ outpatient general mental health clinics. An earlier interview-based investigation of CCM provider perspectives had identified key determinants of CCM sustainability at the sites, yet characteristics related to the ways in which CCM sustainability differed at the sites are still not well understood. For this reason, our objective was to apply MMCS to examine the interview data to determine factors associated with CCM sustainability at each site.

Methods

Clinical and implementation contexts

CCM-based care aims to ensure that patients are treated in a coordinated, patient-centered, and anticipatory manner. This project’s nine outpatient general mental health clinics had participated in a hybrid CCM effectiveness-implementation trial 3 to 4 years prior, which had resulted in improved clinical outcomes that were not universally maintained post-implementation (i.e., after implementation funding and associated evaluation efforts concluded) [ 7 , 9 ]. This lack of aggregate sustainability across the nine clinics is what prompted the earlier interview-based investigation of CCM provider perspectives that identified key determinants of CCM sustainability at the trial sites [ 10 ].

These prior works were conducted in VA outpatient mental health teams, known as Behavioral Health Interdisciplinary Program (BHIP) teams. While there was variability in the exact composition of each BHIP team, all teams consisted of a multidisciplinary set of frontline clinicians (e.g., psychiatrists, psychologists, social workers, nurses) and support staff, serving a panel of about 1000 patients each.

This current project applied MMCS to examine the data from the earlier interviews [ 10 ] for the ways in which CCM sustainability differed at the sites and the factors related to sustainability. The project was determined to be non-research by the VA Boston Research and Development Service, and therefore did not require oversight by the Institutional Review Board (IRB). Details regarding the procedures undertaken for the completed hybrid CCM effectiveness-implementation trial, which serves as the context for this project, have been previously published [ 6 , 7 ]. Similarly, details regarding data collection for the follow-up provider interviews have also been previously published [ 10 ]. We provide a brief overview of the steps that we took for data collection and describe the steps that we took for applying MMCS to analyze the interview data. Additional file  1 outlines our use of the Consolidated Criteria for Reporting Qualitative Research (COREQ) Checklist [ 11 ].

Data collection

We recruited 30 outpatient mental health providers across the nine sites that had participated in the CCM implementation trial, including a multidisciplinary mix of mental health leaders and frontline staff. We recruited participants via email, and we obtained verbal informed consent from all participants. Each interview lasted between 30 and 60 min and focused on the degree to which the participant perceived care processes to have remained aligned to the CCM’s six core elements: work role redesign, patient self-management support, provider decision support, clinical information systems, linkages to community resources, and organizational/leadership support [ 12 , 13 , 14 ]. Interview questions also inquired about the participant’s perceived barriers and enablers influencing CCM sustainability, as well as about the latest status of CCM-based care practices. Interviews were digitally recorded and professionally transcribed. Additional details regarding data collection have been previously published [ 10 ].

Data analysis

We applied MMCS’ nine analytical steps [ 1 ] to the interview data. Each step described below was led by one designated member of the project team, with subsequent review by all project team members to reach a consensus on the examination conducted for each step.

We established the evaluation goal (step 1) to identify the ways in which sustainability differed across the sites and the factors related to sustainability, defining sustainability (step 2) as the continued existence of CCM-aligned care practices—namely, that care processes remained aligned with the six core CCM elements. Table  1 shows examples of care processes that align with each CCM element. As our prior works directly leading up to this project (i.e., design and evaluation of the CCM implementation trial that involved the very sites included in this project [ 6 , 15 , 16 ]) were guided by the Integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework [ 17 ] and i-PARIHS positions facilitation (the implementation strategy that our trial was testing) as the core ingredient that drives implementation [ 17 ], we selected i-PARIHS’ four domains—innovation, recipients, context, and facilitation—as relevant domains under which to examine factors influencing sustainability (step 3). i-PARIHS posits that the successful implementation of an innovation and its sustained use by recipients in a context is enabled by facilitation (both the individuals doing the facilitation and the process used for facilitation). We examined the data on both sustainability and potentially relevant i-PARIHS domains (step 4) by conducting directed content analysis [ 18 ] of the recorded and professionally transcribed interview data. We used the six CCM elements and the four i-PARIHS domains as a priori codes.
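
For readers who organize such coded data programmatically, here is a minimal sketch (not part of the authors' published protocol) of how coded interview excerpts might be grouped by site and a priori code before drafting per-site summaries. The records and code labels are invented placeholders.

```python
# Minimal sketch of organizing coded interview excerpts by site and
# a priori code (6 CCM elements + 4 i-PARIHS domains) ahead of
# per-site summarization. All records shown are invented placeholders.
from collections import defaultdict

# (site, code, excerpt) triples as a qualitative coder might export them
coded_segments = [
    ("site_A", "CCM:work_role_redesign", "Our nurses now triage walk-ins..."),
    ("site_A", "iPARIHS:facilitation", "Our internal facilitator kept us on track..."),
    ("site_B", "CCM:clinical_information_systems", "We still use the registry..."),
]

by_site_code = defaultdict(list)
for site, code, excerpt in coded_segments:
    by_site_code[(site, code)].append(excerpt)

for (site, code), excerpts in sorted(by_site_code.items()):
    print(f"{site} | {code}: {len(excerpts)} excerpt(s)")
```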

Additional file  2 provides an overview of data input, tasks performed, and analysis output for MMCS steps 5 through 9 described below. We assessed sustainability per site (step 5) by generating CCM code summaries per site, and reached a consensus on whether each site exhibited high, medium, or low sustainability relative to other sites based on the summary data. We assigned a higher sustainability level for sites that exhibited more CCM-aligned care processes, had more participants consistently mention those processes, and considered those processes more as “just the way things are done” at the site. Namely, (i) high sustainability sites had concrete examples of CCM-aligned care processes (such as the ones shown in Table  1 ) for many of the six CCM elements, which multiple participants mentioned as central to how they deliver care, (ii) low sustainability sites had only a few concrete examples of CCM-aligned care processes, mentioned by only a small subset of participants and/or inconsistently practiced, and (iii) medium sustainability sites matched neither of the high nor low sustainability cases, having several concrete examples of CCM-aligned care process for some of the CCM elements, varying in whether they are mentioned by multiple participants or how consistently they are a part of delivering care. For the CCM code summaries per site, one project team member initially reviewed the coded data to draft the summaries including exemplar quotes. Each summary and relevant exemplar quotes were then reviewed by and refined with input from all six project team members during recurring team meetings to finalize the high, medium, or low sustainability designation to use in the subsequent MMCS steps. Reviewing and refining the summaries for the nine sites took approximately four 60-min meetings of the six project team members, with each site’s CCM code summary taking approximately 20–35 min to discuss and reach consensus on. We referred to lists of specific examples of how the six core CCM elements were operationalized in our CCM implementation trial [ 19 , 20 ]. Refinements occurred mostly around familiarizing the newer members of the project team (i.e., those who had not participated in our prior CCM-related work) with the examples and definitions. We aligned to established qualitative analysis methods for consensus-reaching discussions [ 18 , 21 ]. Recognizing the common challenge faced by such discussions in adequately accounting for everyone’s interpretations of the data [ 22 ], we drew on Bens’ meeting facilitation techniques [ 23 ] that include setting ground rules, ensuring balanced participation from all project team members, and accurately recording decisions and action items.

We then identified influencing factors per site (step 6), by generating i-PARIHS code summaries per site and identifying distinct factors under each domain of i-PARIHS (e.g., Collaborativeness and teamwork as a factor under the Recipients domain). For the i-PARIHS code summaries per site, one project team member initially reviewed the coded data to draft the summaries including exemplar quotes. They elaborated on each i-PARIHS domain-specific summary by noting distinct factors that they deemed relevant to the summary, proposing descriptive wording to refer to each factor (e.g., “team members share a commitment to their patients” under the Recipients domain). Each summary, associated factor descriptions, and relevant exemplar quotes were then reviewed and refined with input from all six project team members during recurring team meetings to finalize the relevant factors to use in the subsequent MMCS steps. Finalizing the factors included deciding which similar proposed factor descriptions from different sites to consolidate into one factor and which wording to use to refer to the consolidated factor (e.g., “team members share a commitment to their patients,” “team members collaborate well,” and “team members know each other’s styles and what to expect” were consolidated into the Collaborativeness and teamwork factor under the Recipients domain). It took approximately four 60-min meetings of the six project team members to review and refine the summaries and factors for the nine sites, with each site’s i-PARIHS code summary and factors taking approximately 20–35 min to discuss and reach consensus on. We referred to lists of explicit definitions of i-PARIHS constructs that our team members had previously developed and published [ 16 , 24 ]. We once again aligned to established qualitative analysis methods for consensus-reaching discussions [ 18 , 21 ], drawing on Bens’ meeting facilitation techniques [ 23 ] to adequately account for everyone’s interpretations of the data [ 22 ].

We organized the examined data (i.e., the assessed sustainability and identified factors per site) into a sortable matrix (step 7) using Microsoft Excel [ 25 ], laid out by influencing factor (row), sustainability (column), and site (sheet). We conducted within-site analysis of the matrixed data (step 8), examining the data on each influencing factor and designating whether the factor (i) was present, somewhat present, or minimally present [based on aggregate reports from the site’s participants; used “minimally present” when, considering all available data from a site regarding a factor, the factor was predominantly weak (e.g., predominantly weak Ability to continue patient care during COVID at a medium sustainability site); used “somewhat present” when, considering all available data from a site regarding a factor, the factor was neither predominantly strong nor predominantly weak (e.g., neither predominantly strong nor predominantly weak Collaborativeness and teamwork at a low sustainability site)], and (ii) had an enabling, hindering, or neutral/unclear influence on sustainability (designated as “neutral” when, considering all available data from a site regarding a factor, the factor had neither a predominantly enabling nor a predominantly hindering influence on sustainability). These designations of factors’ presence and influence are conceptually representative of what is commonly referred to as magnitude and valence, respectively, by other efforts that construct scoring for qualitative data (e.g., [ 26 , 27 ]). Like the team-based consensus approach of earlier MMCS steps, factors’ presence and type of influence per site were initially proposed by one project team member after reviewing the matrix’s site-specific data, then refined with input from all project team members during recurring team meetings that reviewed the matrix. Accordingly, similar to the earlier MMCS steps, we aligned to established qualitative methods [ 18 , 21 ] and meeting facilitation techniques [ 23 ] for these consensus-reaching discussions.

We then conducted a cross-site analysis of the matrixed data (step 9), assessing whether factors and their combinations were (i) present across multiple sites, (ii) consistently associated with higher or lower sustainability, and (iii) emphasized at some sites more than others. We noted that any factor may have not come up during interviews with a site because either it is not pertinent or it is pertinent but still did not come up, although we asked an open-ended question at the end of each interview about whether there was anything else that the participant wanted to share regarding sustainability. To adequately account for these possibilities, we decided as a team to regard a factor or a combination of factors as being associated with high/medium/low sustainability if it was identified at a majority (i.e., even if not all) of the sites designated as high/medium/low sustainability (e.g., if the Collaborativeness and teamwork factor is identified at a majority, even if not all, of the high sustainability sites, we would find it to be associated with high sustainability). Like the team-based consensus approach of earlier MMCS steps, cross-site patterns were initially proposed by one project team member after reviewing the matrix’s cross-site data, then refined with input from all project team members during recurring team meetings that reviewed the matrix. Accordingly, similar to the earlier MMCS steps, we aligned to established qualitative methods [ 18 , 21 ] and meeting facilitation techniques [ 23 ] for these consensus-reaching discussions. We acknowledged the potential existence of additional factors influencing sustainability that may not have emerged during our interviews and also may vary substantially between sites. For example, adaptation of the CCM, characteristics of the patient population, and availability of continued funding, which are factors that extant literature reports as being relevant to sustainability [ 28 , 29 ], were not seen in our interview data. To maintain our analytic focus on the factors seen in our data, we did not add these factors to our analysis.
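
As an illustration only (the authors describe a consensus-driven qualitative process, not an automated one), the following pandas sketch shows how the sortable matrix of steps 7–9 and the stated majority rule could be represented programmatically. All factor designations below are invented placeholders.

```python
# Minimal sketch of the MMCS matrix (steps 7-9) in pandas: one record
# per factor identified at a site, with presence/influence designations
# and the site's sustainability level. The majority rule mirrors the
# paper's decision to call a factor "associated" with a sustainability
# level if identified at a majority of that level's sites.
import pandas as pd

records = [
    # factor, site, sustainability level, presence, influence
    ("Collaborativeness and teamwork", "A", "high", "present", "enabling"),
    ("Collaborativeness and teamwork", "B", "high", "present", "enabling"),
    ("Collaborativeness and teamwork", "G", "low", "somewhat present", "neutral"),
    ("Turnover of clinic staff and leadership", "A", "high", "present", "hindering"),
    ("Turnover of clinic staff and leadership", "D", "medium", "present", "hindering"),
]
df = pd.DataFrame(records, columns=["factor", "site", "sustainability",
                                    "presence", "influence"])

# The project had three sites per sustainability level, so a "majority"
# means a factor was identified at two or more of a level's sites.
SITES_PER_LEVEL = 3

counts = df.groupby(["factor", "sustainability"])["site"].nunique()
for (factor, level), n in counts.items():
    if n > SITES_PER_LEVEL / 2:
        print(f"'{factor}' is associated with {level} sustainability "
              f"(identified at {n} of {SITES_PER_LEVEL} sites)")
```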

Results

For the nine sites included in this project, we found the degree of CCM sustainability to be split evenly across the sites—three high-, three medium-, and three low-sustainability. Twenty-five total influencing factors were identified under the i-PARIHS domains of Innovation (6), Recipients (6), Context (8), and Facilitation (5). Table  2 shows these identified influencing factors by domain. Figure  1 shows 11 influencing factors that were identified for at least two sites within a group of high/medium/low sustainability sites—e.g., the factor “consistent and strong internal facilitator” is shown as being present at high sustainability sites with an enabling influence on sustainability, because it was identified as such at two or more of the high sustainability sites. Of these 11 influencing factors, four were identified only for sites with high CCM sustainability and two were identified only for sites with medium or low CCM sustainability.

Fig. 1: Influencing factors that were identified for at least two sites within a group of high/medium/low sustainability sites

Key trends in influencing factors associated with high, medium, and/or low CCM sustainability

Three factors across two i-PARIHS domains exhibited strong trends by sustainability status. They were the Collaborativeness and teamwork and Turnover of clinic staff and leadership factors under the Recipients domain, and the Having a consistent and strong internal facilitator factor under the Facilitation domain.

Recipients-related factors

Collaborativeness and teamwork was present with an enabling influence on CCM sustainability at most high and medium sustainability sites, while it was only somewhat present with a neutral influence on CCM sustainability at most low sustainability sites. When asked what had made their BHIP team work well, a participant from a high sustainability site said,

“Just a collaborative spirit.” (Participant 604)

A participant from a medium sustainability site said,

“We joke that [the BHIP teams] are even family, that the teams really do function pretty tightly and they each have their own personality.” (Participant 201)

At the low sustainability sites, willingness to work as a team varied across team members; a participant from a low sustainability site said,

“… I think it has to be the commitment of the people who are on the team. So those that are regularly attending, we get a lot more out of it than those that probably don't ever come [to team meetings].” (Participant 904)

Collaborativeness and teamwork of BHIP team members were often perceived as the highlight of pursuing interdisciplinary care.

Turnover of clinic staff and leadership was present with a hindering influence on CCM sustainability at most high, medium, and low sustainability sites.

“We’ve lost a lot of really, really good providers here in the time I’ve been here …,” (Participant 102)

said a participant from a low-sustainability site that had to reconfigure its BHIP teams due to clinic staff shortages. Turnover of mental health clinic leadership made it difficult to maintain CCM practices, especially beyond the teams that participated in the original CCM implementation trial. A participant from a medium sustainability site said,

“Probably about 90 percent of the things that we came up with have fallen by the wayside. Within our team, many of those remain but again, that hand off towards the other teams that I think partly is due to the turnover rate with program managers, supervisors, didn’t get fully implemented.” (Participant 703)

Although turnover was an issue for high sustainability sites as well, there was also indication of the situation improving in recent years; a participant from a high sustainability site said,

“… our attrition rollover rate has dropped quite a bit and I would really attribute that to [the CCM being] more functional and more sustainable and tolerable for the providers.” (Participant 502)

As such, staff and leadership turnover was deemed a major challenge for CCM sustainability for all sites regardless of the overall level of sustainability.

Facilitation-related factor

Having a consistent and strong internal facilitator was present with an enabling influence on CCM sustainability at high sustainability sites, not identified as an influencing factor at most of the medium sustainability sites, and variably present with a hindering, neutral, or unclear influence on CCM sustainability at low sustainability sites. Participants from a high sustainability site perceived that it was important for the internal facilitator to understand different BHIP team members’ personalities and know the clinic’s history. A participant from another high sustainability site shared that, as an internal facilitator themselves, they focused on recognizing and reinforcing the progress of team members:

“… I'm often the person who kind of [starts] off with, ‘Hey, look at what we've done in this location,’ ‘Hey look at what the team's done this month.’” (Participant 402)

A participant from a low sustainability site had also served as an internal facilitator and recounted the difficulty and importance of readying the BHIP team to function in the long run without their assistance:

“I should have been able to get out sooner, I think, to get it to have them running this themselves. And that was just a really difficult process.” (Participant 301)

Participants, especially from the high and low sustainability sites, attributed their BHIP teams’ successes and challenges to the skills of the internal facilitator.

Influencing factors identified only for sites with high CCM sustainability

Four factors across four i-PARIHS domains were identified for high sustainability sites and not for medium or low sustainability sites. They were the factors Details about the CCM being well understood (Innovation domain), Interdisciplinary coordination (Recipients domain), Having adequate clinic space for CCM team members (Context domain), and Having a knowledgeable and helpful external facilitator (Facilitation domain).

Innovation-related factor

Details about the CCM being well understood was minimally to somewhat present with an unclear influence on CCM sustainability.

“We’ve … been trying to help our providers see the benefit of team-based care and the episodes-of-care idea, and I would say that is something our folks really have continued to struggle with as well,” (Participant 401)

said a participant from a high sustainability site. “What is considered CCM-based care?” continued to be a question on providers’ minds. A participant from a high sustainability site asked during the interview,

“Is there kind of a clearing house of some of the best practices for [CCM] that you guys have … or some other collection of resources that we could draw from?” (Participant 601)

Although such references are indeed accessible online organization-wide, participants were not always aware of those resources or what exactly CCM entails.

Recipients-related factor

Interdisciplinary coordination was somewhat present with a hindering, neutral, or unclear influence on CCM sustainability. Coordination between psychotherapy and psychiatry providers was deemed difficult by participants from high-sustainability sites. A participant said,

“We were initially kind of top heavy on the psychiatry so just making sure we have … therapy staff balancing that out [has been important].” (Participant 501)

Another participant perceived that BHIP teams were helpful in managing

“… ‘sibling rivalry’ between different disciplines … because [CCM] puts us all in one team and we communicate.” (Participant 505)

Interdisciplinary coordination was understood by the participants as being necessary for effective CCM-based care yet difficult to achieve.

Context-related factor

Having adequate clinic space for CCM team members was minimally to somewhat present with a hindering, neutral, or unclear influence on CCM sustainability. COVID-19 led to changes in how clinic space was used and assigned. A participant from a high sustainability site remarked,

“Pre-COVID everything was in a room instead of online. And now all our meetings are online and so it's actually really easy for the supervisors to be able to rotate through them and then, you know, they can answer programmatic questions ….” (Participant 402)

Participants from another high sustainability site found that issues regarding limited clinic space were both exacerbated and alleviated by COVID, with the mental health service losing space to vaccine clinics but more mental health clinicians teleworking and in less need of clinic space. Virtual connections were seen to alleviate some physical workspace-related concerns.

Facilitation-related factor

Having a knowledgeable and helpful external facilitator was variably present; when present, it had an enabling influence on CCM sustainability. Participants from a high sustainability site noted how many of the external facilitator’s efforts to change the BHIP team’s work processes very much remained over time. An example of a change was to have team meetings be structured to meet evolving patient needs. Team members came to meetings with the shared knowledge and expectation that,

“… we need to touch on folks who are coming out of the hospital, we need to touch on folks with higher acuity needs.” (Participant 402)

Implementation support that sites received from their external facilitator mostly occurred during the time period of the original CCM implementation trial; correspondence with the external facilitator after that trial time period was not common for sites. Participants still largely found the external facilitator to provide helpful guidance and advice on delivering CCM-based care.

Influencing factors identified only for sites with medium or low CCM sustainability

Two factors were identified for medium or low sustainability sites and not for high sustainability sites. They were the factors Ability to continue patient care during COVID and Adequate resources/capacity for care delivery . These factors were both under i-PARIHS’ Context domain, unlike the influencing factors above that were identified only for high sustainability sites, which spanned all four i-PARIHS domains.

Context-related factors

Ability to continue patient care during COVID had a hindering influence on CCM sustainability when minimally present. Participants felt that their CCM work was challenged when delivering care through telehealth was made difficult—e.g., at a medium sustainability site, site policies during the pandemic required a higher number of in-person services than the BHIP team providers expected or desired to deliver. On the other hand, this factor had an enabling influence on CCM sustainability when present. A participant at a low sustainability site mentioned the effect of telehealth on being able to follow up more easily with patients who did not show up for their appointments:

“… my no-show rate has dropped dramatically because if people don’t log on after a couple minutes, I call them. They're like ‘oh, I forgot, let me pop right on,’ whereas, you know, in the face-to-face space, you know, you wait 15 minutes, you call them, it’s too late for them to come in so then they're no shows.” (Participant 102)

The advantages of virtual care delivery, as well as the challenges of getting approvals to pursue it to varying extents, were well recognized by the participants.

Adequate resources/capacity for care delivery was minimally present at medium sustainability sites with a hindering influence on CCM sustainability. At a medium sustainability site, although leadership was supportive of CCM, resources were being used to keep clinics operational (especially during COVID) rather than investing in building new CCM-based care delivery processes.

“I think that if my boss came to me, [and asked] what could I do for [the clinics] … I would say even more staff,” (Participant 202)

said a participant from a medium sustainability site. At the same time, the participant, as many others we interviewed, understood and emphasized the need for BHIP teams to proceed with care delivery even when resources were limited:

“… when you’re already dealing with a very busy clinic, short staff and then you’re hit with a pandemic you handle it the best that you can.” (Participant 202)

Participants felt the need for basic resource requirements to be met in order for CCM-based care to be feasible.

Discussion

In this project, we examined factors influencing the sustainability of CCM-aligned care practices at general mental health clinics within nine VA medical centers that previously participated in a CCM implementation trial. Guided by the core CCM elements and i-PARIHS domains, we conducted and analyzed CCM provider interviews. Using MMCS, we found CCM sustainability to be split evenly across the nine sites (three high, three medium, and three low), and that sustainability may be related most strongly to provider collaboration, knowledge retention during staff/leadership transitions, and availability of skilled internal facilitators.

In comparison to most high sustainability sites, participants from most medium or low sustainability sites did not mention a knowledgeable and helpful external facilitator who enabled sustainability. Participants at the high sustainability sites also emphasized the need for clarity about what CCM-based care comprises, interdisciplinary coordination in delivering CCM-aligned care, and adequate clinic space for BHIP team members to connect and collaborate. In contrast, in comparison to participants at most high sustainability sites, participants at most medium or low sustainability sites emphasized the need for better continuity of patient-facing activities during the COVID-19 pandemic and more resources/capacity for care delivery. A notable difference between these two groups of influencing factors is that the ones emphasized at most high sustainability sites are more CCM-specific (e.g., external facilitator with CCM expertise, knowledge, and structures to support delivery of CCM-aligned care), while the ones emphasized at most medium or low sustainability sites are factors that certainly relate to CCM sustainability but are focused on care delivery operations beyond CCM-aligned care (e.g., COVID’s widespread impacts, limited staff availability). In short, an emphasis on immediate, short-term clinical needs in the face of the COVID-19 pandemic and staffing challenges appeared to sap sites’ enthusiasm for sustaining more collaborative, CCM-consistent care processes.

Our previous qualitative analysis of these interview data suggested that in order to achieve sustainability, it is important to establish appropriate infrastructure, organizational readiness, and mental health service- or department-wide coordination for CCM implementation [ 10 ]. The findings from the current project augment these previous findings by highlighting the specific factors associated with higher and lower CCM sustainability across the project sites. This additional knowledge provides two important insights into what CCM implementation efforts should prioritize with regard to the previously recommended appropriate infrastructure, readiness, and coordination. First, for knowledge retention and coordination during personnel changes (including any changes in internal facilitators through and following implementation), care processes and their specific procedures should be established and documented in order to bring new personnel up to speed on those care processes. Management sciences, as applied to health care and other fields, suggest that such organizational knowledge retention can be maximized when there are (i) structures set up to formally recognize/praise staff when they share key knowledge, (ii) succession plans to be applied in the event of staff turnover, (iii) opportunities for mentoring and shadowing, and (iv) after action reviews of conducted care processes, which allow staff to learn about and shape the processes themselves [ 30 , 31 , 32 , 33 ]. Future CCM implementation efforts may thus benefit from enacting these suggestions alongside establishing and documenting CCM-based care processes and associated procedures.

Second, efforts to implement CCM-aligned practices into routine care should account for the extent to which sites’ more fundamental operational needs are met or being addressed. That information can be used to appropriately scope the plan, expectations, and timeline for implementation. For instance, ongoing critical staffing shortages or high turnover [ 34 ] at a site are unlikely to be resolved through a few months of CCM implementation. In fact, in that situation, it is possible that CCM implementation efforts could lead to reduced team effectiveness in the short term, given the effort required to establish more collaborative and coordinated care processes [ 35 ]. Should CCM implementation move forward at a given site, implementation goals ought to be set on making progress in realms that are within the implementation effort’s control (e.g., designing CCM-aligned practices that take staffing challenges into consideration) [ 36 , 37 ] rather than on factors outside of the effort’s control (e.g., staffing shortages). As healthcare systems determine how to deploy support (e.g., facilitators) to sites for CCM implementation, they would benefit from considering whether it is primarily CCM expertise that the site needs at the moment, or more foundational organizational resources (e.g., mental health staffing, clinical space, leadership enhancement) [ 38 ] to first reach an operational state that can most benefit from CCM implementation efforts at a later point in time. There is growing consensus across the field that the readiness of a healthcare organization to innovate is a prerequisite to successful innovation (e.g., CCM implementation) regardless of the specific innovation [ 39 , 40 ]. Several promising strategies specifically target these organizational considerations for implementing evidence-based practices (e.g., [ 41 , 42 ]). Further, recent works have begun to more clearly delineate leadership-related, climate-related, and other contextual factors that contribute to organizations’ innovation readiness [ 43 ], which can inform healthcare systems’ future decisions regarding preparatory work leading to, and timing of, CCM implementation at their sites.

These considerations informed by MMCS may have useful implications for implementation strategy selection and tailoring for future CCM implementation efforts, especially in delineating the target level (e.g., system, organizational, clinic, individual) and timeline of implementation strategies to be deployed. For instance, of the three factors found to most notably trend with CCM sustainability, Collaborativeness and teamwork may be strengthened through shorter-term team-building interventions at the organizational and/or clinic levels [ 38 ], Turnover of clinic staff and leadership may be mitigated by aiming for longer-term culture/climate change at the system and/or organizational levels [ 44 , 45 , 46 ], and Having a consistent and strong internal facilitator may be ensured more immediately by selecting an individual with fitting expertise/characteristics to serve in the role [ 15 ] and imparting innovation/facilitation knowledge to them [ 47 ]. Which of these factors to focus on, and through what specific strategies, can be decided in partnership with an implementation site—for instance, candidate strategies can be identified based on ones that literature points to for addressing these factors [ 48 ], systematic selection of the strategies to move forward can happen with close input from site personnel [ 49 ], and explicit further specification of those strategies [ 50 ] can also happen in collaboration with site personnel to amply account for site-specific contexts [ 51 ].

As is common for implementation projects, the findings of this project are highly context-dependent. It involves the implementation of a specific evidence-based practice (the CCM) using a specific implementation strategy (implementation facilitation) at specific sites (BHIP teams within general mental health clinics at nine VA medical centers). For such context-dependent findings to be transferable [ 52 , 53 ] to meaningfully inform future implementation efforts, sources of variation in the findings and how the findings were reached must be documented and traceable. This means being explicit about each step and decision that led up to cross-site analysis, as MMCS encourages, so that future implementation efforts can accurately view and consider why and how findings might be transferable to their own work. For instance, beyond the finding that Turnover of clinic staff and leadership was a factor present at most of the examined sites, MMCS’ traceable documentation of qualitative data associated with this factor at high sustainability sites also allowed highlighting the perception that CCM implementation is contributing to mitigating turnover of providers in the clinic over time, which may be a crucial piece of information that fuels future CCM implementation efforts.

Furthermore, to compare findings and interpretations across projects, consistent procedures for setting up and conducting these multi-site investigations are indispensable [ 54 , 55 , 56 ]. Although many projects involve multiple sites and assess variations across the sites, it is less common to have clearly delineated protocols for conducting such assessments. MMCS is meant to target this very gap, by offering a formalized sequence of steps that prompt specification of analytical procedures and decisions that are often interpretive and left less specified. MMCS uses a concrete data structure (the matrix) to traceably organize information and knowledge gained from a project, and the matrix can accommodate various data sources and conceptual groundings (e.g., guiding theories, models, and frameworks) that may differ from project to project – for instance, although our application of MMCS aligned to i-PARIHS, other projects applying MMCS [ 2 , 5 ] use different conceptual guides (e.g., Consolidated Framework for Implementation Research [ 57 ], Theoretical Domains Framework [ 58 ]). Therefore, as more projects align to the MMCS steps [ 1 ] to identify factors related to implementation and sustainability, better comparisons, consolidations, and transfers of knowledge between projects may become possible.

This project has several limitations. First, the high, medium, and low sustainability assigned to the sites were based on the sites’ CCM sustainability relative to one another, rather than based on an external metric of sustainability. As measures of sustainability such as the Program Sustainability Assessment Tool [ 59 , 60 ] and the Sustainment Measurement System Scale [ 61 ] become increasingly developed and tested, future projects may consider the feasibility of incorporating such measures to assess each site’s sustainability. In our case, we worked on addressing this limitation by using a consensus approach within our project team to assign sustainability levels to sites, as well as by confirming that the sites that we designated as high sustainability exhibited CCM elements that we had previously observed at the end of their participation in the original CCM implementation trial [ 19 ]. Second, we did not assign strict thresholds above/below which the counts or proportions of data regarding a factor would automatically indicate whether the factor (i) was present, somewhat present, or minimally present and (ii) had an enabling, hindering, or neutral/unclear influence on sustainability. This follows widely accepted qualitative analytical guidance that discourages characterizing findings solely based on the frequency with which a notion is mentioned by participants [ 62 , 63 , 64 ], in order to prevent unsubstantiated inferences or conclusions. We sought to address this limitation in two ways: We carefully documented the project team’s rationale for each consensus reached, and we reviewed all consensuses reached in their entirety to ensure that any two factors with the same designation (e.g., “minimally present”) do not have associated rationale that conflict across those factors. These endeavors we undertook closely adhere to established case study research methods [ 65 ], which MMCS builds on, that emphasize strengthening the validity and reliability of findings through documenting a detailed analytic protocol, as well as reviewing data to ensure that patterns match across analytic units (e.g., factors, interviewees, sites). Third, our findings are based on three sites each for high/medium/low sustainability, and although we identified single factors associated with sustainability, we found no specific combinations of factors’ presence and influence that were repeatedly existent at a majority of the sites designated as high/medium/low sustainability. Examining additional sites on the factors identified through this work (as we will for our subsequent CCM implementation trial described below) will allow more opportunities for repeated combinations and other factors to emerge, making possible firmer conclusions regarding the extent to which the currently identified factors and absence of identified combinations are applicable beyond the sites included in this study. Fourth, the identified influencing factor “leadership support for CCM” (under the Context domain of the i-PARIHS framework) substantially overlaps in concept with the core “organizational/leadership support” element of the CCM. To avoid circular reasoning, we used leadership support-related data to inform our assignment of sites’ high, medium, or low CCM sustainability, rather than as a reason for the sites’ CCM sustainability. 
In reality, strong leadership support may both result from and contribute to implementation and sustainability [ 16 , 66 ], and thus causal relationships between the i-PARIHS-aligned influencing factors and the CCM elements (possibly with feedback loops) warrant further examination to most appropriately use leadership support-related data in future analyses of CCM sustainability. Fifth, findings may be subject to both social desirability bias in participants providing more positive than negative evidence of sustainability (especially participants who are responsible for implementing and sustaining CCM-aligned care at their site) and the project team members’ bias in interpreting the findings to align to their expectations of further effort being necessary to sustainably implement the CCM. To help mitigate this challenge, the project interviewers strove to elicit from participants both positive and negative perceptions and experiences related to CCM-based care delivery, both of which were present in the examined interview data.

Future work stemming from this project is twofold. Regarding CCM implementation, we will conduct a subsequent CCM implementation trial involving eight new sites to prospectively examine how implementation facilitation with an enhanced focus on these findings affects CCM sustainability. We started planning for sustainability prior to implementation, looking to this work for indicators of specific modifications needed to the previous way in which we used implementation facilitation to promote the uptake of CCM-based care [ 67 ]. Findings from this work suggest that sustainability may be related most strongly to (i) provider collaboration, (ii) knowledge retention during staff/leadership transitions, and (iii) availability of skilled internal facilitators. Hence, we will accordingly prioritize developing procedures for (i) regular CCM-related information exchange amongst BHIP team members, as well as between the BHIP team and clinic leadership, (ii) both translating knowledge to and keeping knowledge documented at the site, and (iii) supporting the sites’ own personnel to take the lead in driving CCM implementation.

Regarding MMCS, we will continuously refine and improve the method by learning from other projects applying, testing, and critiquing MMCS. Outside of our CCM-related projects, examinations of implementation data using MMCS are actively underway for various implementation efforts including that of a data dashboard for decision support on transitioning psychiatrically stable patients from specialty mental health to primary care [ 2 ], a peer-led healthy lifestyle intervention for individuals with serious mental illness [ 3 ], screening programs for intimate partner violence [ 4 ], and a policy- and organization-based health system strengthening intervention to improve health systems in sub-Saharan Africa [ 5 ]. As MMCS is used by more projects that differ from one another in their specific outcome of interest, and especially in light of our MMCS application that examines factors related to sustainability, we are curious whether certain proximal to distal outcomes are more subject to heterogeneity in influencing factors than other outcomes. For instance, sustainability outcomes, which are tracked following a longer passage of time than some other outcomes, may be subject to more contextual variations that occur over time and thus could particularly benefit from being examined using MMCS. We will also explore MMCS’ complementarity with coincidence analysis and other configurational analytical approaches [ 68 ] for examining implementation phenomena. We are excited about both the step-by-step traceability that MMCS can bring to such methods and those methods’ computational algorithms that can be beneficial to incorporate into MMCS for projects with larger numbers of sites. For example, Salvati and colleagues [ 69 ] described both the inspiration that MMCS provided in structuring their data as well as how they addressed MMCS’ visualization shortcomings through their innovative data matrix heat mapping, which led to their selection of specific factors to include in their subsequent coincidence analysis. Coincidence analysis is an enhancement to qualitative comparative analysis and other configurational analytical methods, in that it is formulated specifically for causal inference [ 70 ]. Thus, in considering improved reformulations of MMCS’ steps to better characterize examined factors as explicit causes to the outcomes of interest, we are inspired by and can draw on coincidence analysis’ approach to building and evaluating causal chains that link factors to outcomes. Relatedly, we have begun to actively consider the potential contribution that MMCS can make to hypothesis generation and theory development for implementation science. As efforts to understand the mechanisms through which implementation strategies work are gaining momentum [ 71 , 72 , 73 ], there is an increased need for methods that help decompose our understanding of factors that influence the mechanistic pathways from strategies to outcomes [ 74 ]. Implementation science is facing the need to develop theories, beyond frameworks, which delineate hypotheses for observed implementation phenomena that can be subsequently tested [ 75 ]. The methodical approach that MMCS offers can aid this important endeavor, by enabling data curation and examination of pertinent factors in a consistent way that allows meaningful synthesis of findings across sites and studies. 
We see these future directions as concrete steps toward elucidating the factors related to sustainable implementation of EBPs, especially leveraging data from projects where the number of sites is much smaller than the number of factors that may matter—which is indeed the case for most implementation projects.

Conclusions

Using MMCS, we found that provider collaboration, knowledge retention during staff/leadership transitions, and availability of skilled internal facilitators may be most strongly related to CCM sustainability in VA outpatient mental health clinics. Informed by these findings, we have a subsequent CCM implementation trial underway to prospectively test whether strengthening these factors within implementation facilitation enhances sustainability. The MMCS steps used here for systematic multi-site examination can also be applied to determining sustainability-related factors relevant to various other EBPs and implementation contexts.

Availability of data and materials

The data analyzed during the current project are not publicly available because participant privacy could be compromised.

Abbreviations

BHIP: Behavioral Health Interdisciplinary Program

CCM: Collaborative Chronic Care Model

COREQ: Consolidated Criteria for Reporting Qualitative Research

COVID: coronavirus disease

EBP: evidence-based practice

IRB: Institutional Review Board

i-PARIHS: Integrated Promoting Action on Research Implementation in Health Services

MMCS: Matrixed Multiple Case Study

VA: United States Department of Veterans Affairs

References

1. Kim B, Sullivan JL, Ritchie MJ, Connolly SL, Drummond KL, Miller CJ, et al. Comparing variations in implementation processes and influences across multiple sites: what works, for whom, and how? Psychiatry Res. 2020;283:112520.

2. Hundt NE, Yusuf ZI, Amspoker AB, Nagamoto HT, Kim B, Boykin DM, et al. Improving the transition of patients with mental health disorders back to primary care: a protocol for a partnered, mixed-methods, stepped-wedge implementation trial. Contemp Clin Trials. 2021;105:106398.

3. Tuda D, Bochicchio L, Stefancic A, Hawes M, Chen J-H, Powell BJ, et al. Using the matrixed multiple case study methodology to understand site differences in the outcomes of a Hybrid Type 1 trial of a peer-led healthy lifestyle intervention for people with serious mental illness. Transl Behav Med. 2023;13(12):919–27.

4. Adjognon OL, Brady JE, Iverson KM, Stolzmann K, Dichter ME, Lew RA, et al. Using the Matrixed Multiple Case Study approach to identify factors affecting the uptake of IPV screening programs following the use of implementation facilitation. Implement Sci Commun. 2023;4(1):145.

5. Seward N, Murdoch J, Hanlon C, Araya R, Gao W, Harding R, et al. Implementation science protocol for a participatory, theory-informed implementation research programme in the context of health system strengthening in sub-Saharan Africa (ASSET-ImplementER). BMJ Open. 2021;11(7):e048742.

6. Bauer MS, Miller C, Kim B, Lew R, Weaver K, Coldwell C, et al. Partnering with health system operations leadership to develop a controlled implementation trial. Implement Sci. 2016;11:22.

7. Bauer MS, Miller CJ, Kim B, Lew R, Stolzmann K, Sullivan J, et al. Effectiveness of implementing a Collaborative Chronic Care Model for clinician teams on patient outcomes and health status in mental health: a randomized clinical trial. JAMA Netw Open. 2019;2(3):e190230.

8. Ritchie MJ, Dollar KM, Miller CJ, Smith JL, Oliver KA, Kim B, et al. Using implementation facilitation to improve healthcare (version 3). Veterans Health Administration, Behavioral Health Quality Enhancement Research Initiative (QUERI); 2020.

9. Bauer MS, Stolzmann K, Miller CJ, Kim B, Connolly SL, Lew R. Implementing the Collaborative Chronic Care Model in mental health clinics: achieving and sustaining clinical effects. Psychiatr Serv. 2021;72(5):586–9.

10. Miller CJ, Kim B, Connolly SL, Spitzer EG, Brown M, Bailey HM, et al. Sustainability of the Collaborative Chronic Care Model in outpatient mental health teams three years post-implementation: a qualitative analysis. Adm Policy Ment Health. 2023;50(1):151–9.

11. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

12. Von Korff M, Gruman J, Schaefer J, Curry SJ, Wagner EH. Collaborative management of chronic illness. Ann Intern Med. 1997;127(12):1097–102.

13. Wagner EH, Austin BT, Von Korff M. Organizing care for patients with chronic illness. Milbank Q. 1996;74(4):511–44.

14. Coleman K, Austin BT, Brach C, Wagner EH. Evidence on the chronic care model in the new millennium. Health Aff (Millwood). 2009;28(1):75–85.

15. Connolly SL, Sullivan JL, Ritchie MJ, Kim B, Miller CJ, Bauer MS. External facilitators' perceptions of internal facilitation skills during implementation of collaborative care for mental health teams: a qualitative analysis informed by the i-PARIHS framework. BMC Health Serv Res. 2020;20(1):165.

16. Kim B, Sullivan JL, Drummond KL, Connolly SL, Miller CJ, Weaver K, et al. Interdisciplinary behavioral health provider perceptions of implementing the Collaborative Chronic Care Model: an i-PARIHS-guided qualitative study. Implement Sci Commun. 2023;4(1):35.

17. Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11:33.

18. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

19. Sullivan JL, Kim B, Miller CJ, Elwy AR, Drummond KL, Connolly SL, et al. Collaborative Chronic Care Model implementation within outpatient behavioral health care teams: qualitative results from a multisite trial using implementation facilitation. Implement Sci Commun. 2021;2(1):33.

20. Miller CJ, Sullivan JL, Kim B, Elwy AR, Drummond KL, Connolly S, et al. Assessing collaborative care in mental health teams: qualitative analysis to guide future implementation. Adm Policy Ment Health. 2019;46(2):154–66.

21. Miles MB, Huberman AM. Qualitative data analysis: an expanded sourcebook. Thousand Oaks: Sage; 1994.

22. Jones J, Hunter D. Consensus methods for medical and health services research. BMJ. 1995;311(7001):376–80.

23. Bens I. Facilitating with ease!: core skills for facilitators, team leaders and members, managers, consultants, and trainers. Hoboken: John Wiley & Sons; 2017.

24. Ritchie MJ, Drummond KL, Smith BN, Sullivan JL, Landes SJ. Development of a qualitative data analysis codebook informed by the i-PARIHS framework. Implement Sci Commun. 2022;3(1):98.

25. Microsoft Excel. Available from: https://www.microsoft.com/en-us/microsoft-365/excel. Accessed 15 Feb 2024.

26. Madrigal L, Manders OC, Kegler M, Haardörfer R, Piper S, Blais LM, et al. Inner and outer setting factors that influence the implementation of the National Diabetes Prevention Program (National DPP) using the Consolidated Framework for Implementation Research (CFIR): a qualitative study. Implement Sci Commun. 2022;3(1):104.

27. Wilson HK, Wieler C, Bell DL, Bhattarai AP, Castillo-Hernandez IM, Williams ER, et al. Implementation of the Diabetes Prevention Program in Georgia Cooperative Extension according to RE-AIM and the Consolidated Framework for Implementation Research. Prev Sci. 2023; Epub ahead of print.

28. Proctor E, Luke D, Calhoun A, McMillen C, Brownson R, McCrary S, et al. Sustainability of evidence-based healthcare: research agenda, methodological advances, and infrastructure support. Implement Sci. 2015;10:88.

29. Fathi LI, Walker J, Dix CF, Cartwright JR, Joubert S, Carmichael KA, et al. Applying the Integrated Sustainability Framework to explore the long-term sustainability of nutrition education programmes in schools: a systematic review. Public Health Nutr. 2023;26(10):2165–79.

30. Guptill J. Knowledge management in health care. J Health Care Finance. 2005;31(3):10–4.

31. Gammelgaard J. Why not use incentives to encourage knowledge sharing. J Knowledge Manage Pract. 2007;8(1):115–23.

32. Liebowitz J. Knowledge retention: strategies and solutions. Boca Raton: CRC Press; 2008.

33. Ensslin L, Carneiro Mussi C, Rolim Ensslin S, Dutra A, Pereira Bez Fontana L. Organizational knowledge retention management using a constructivist multi-criteria model. J Knowledge Manage. 2020;24(5):985–1004.

34. Peterson AE, Bond GR, Drake RE, McHugo GJ, Jones AM, Williams JR. Predicting the long-term sustainability of evidence-based practices in mental health care: an 8-year longitudinal analysis. J Behav Health Serv Res. 2014;41(3):337–46.

35. Miller CJ, Griffith KN, Stolzmann K, Kim B, Connolly SL, Bauer MS. An economic analysis of the implementation of team-based collaborative care in outpatient general mental health clinics. Med Care. 2020;58(10):874–80.

36. Silver SA, Harel Z, McQuillan R, Weizman AV, Thomas A, Chertow GM, et al. How to begin a quality improvement project. Clin J Am Soc Nephrol. 2016;11(5):893–900.

37. Dixon-Woods M. How to improve healthcare improvement-an essay by Mary Dixon-Woods. BMJ. 2019;367:l5514.

38. Miller CJ, Kim B, Silverman A, Bauer MS. A systematic review of team-building interventions in non-acute healthcare settings. BMC Health Serv Res. 2018;18(1):146.

39. Robert G, Greenhalgh T, MacFarlane F, Peacock R. Organisational factors influencing technology adoption and assimilation in the NHS: a systematic literature review. Report for the National Institute for Health Research Service Delivery and Organisation programme. London; 2009.

40. Kelly CJ, Young AJ. Promoting innovation in healthcare. Future Healthc J. 2017;4(2):121–5.

41. Aarons GA, Ehrhart MG, Farahnak LR, Hurlburt MS. Leadership and organizational change for implementation (LOCI): a randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implement Sci. 2015;10:11.

42. Ritchie MJ, Parker LE, Kirchner JE. Facilitating implementation of primary care mental health over time and across organizational contexts: a qualitative study of role and process. BMC Health Serv Res. 2023;23(1):565.

43. van den Hoed MW, Backhaus R, de Vries E, Hamers JPH, Daniëls R. Factors contributing to innovation readiness in health care organizations: a scoping review. BMC Health Serv Res. 2022;22(1):997.

44. Melnyk BM, Hsieh AP, Messinger J, Thomas B, Connor L, Gallagher-Ford L. Budgetary investment in evidence-based practice by chief nurses and stronger EBP cultures are associated with less turnover and better patient outcomes. Worldviews Evid Based Nurs. 2023;20(2):162–71.

45. Jacob RR, Parks RG, Allen P, Mazzucca S, Yan Y, Kang S, et al. How to "start small and just keep moving forward": mixed methods results from a stepped-wedge trial to support evidence-based processes in local health departments. Front Public Health. 2022;10:853791.

46. Aarons GA, Conover KL, Ehrhart MG, Torres EM, Reeder K. Leader-member exchange and organizational climate effects on clinician turnover intentions. J Health Organ Manag. 2020;35(1):68–87.

47. Kirchner JE, Ritchie MJ, Pitcock JA, Parker LE, Curran GM, Fortney JC. Outcomes of a partnered facilitation strategy to implement primary care-mental health. J Gen Intern Med. 2014;29(Suppl 4):904–12.

48. Strategy Design. CFIR Research Team, Center for Clinical Management Research. Available from: https://cfirguide.org/choosing-strategies/. Accessed 15 Feb 2024.

49. Kim B, Wilson SM, Mosher TM, Breland JY. Systematic decision-making for using technological strategies to implement evidence-based interventions: an illustrated case study. Front Psychiatry. 2021;12:640240.

50. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

51. Lewis CC, Scott K, Marriott BR. A methodology for generating a tailored implementation blueprint: an exemplar from a youth residential setting. Implement Sci. 2018;13(1):68.

52. Maher C, Hadfield M, Hutchings M, de Eyto A. Ensuring rigor in qualitative data analysis: a design research approach to coding combining NVivo with traditional material methods. Int J Qual Methods. 2018;17(1):1609406918786362.

53. Holloway I. A-Z of qualitative research in healthcare. 2nd ed. Oxford: Wiley-Blackwell; 2008.

54. Reproducibility and Replicability in Research. National Academies. Available from: https://www.nationalacademies.org/news/2019/09/reproducibility-and-replicability-in-research. Accessed 15 Feb 2024.

55. Chinman M, Acosta J, Ebener P, Shearer A. "What we have here, is a failure to [Replicate]": ways to solve a replication crisis in implementation science. Prev Sci. 2022;23(5):739–50.

56. Vicente-Saez R, Martinez-Fuentes C. Open Science now: a systematic literature review for an integrated definition. J Bus Res. 2018;88:428–36.

57. Consolidated Framework for Implementation Research. CFIR Research Team, Center for Clinical Management Research. Available from: https://cfirguide.org/. Accessed 15 Feb 2024.

58. Atkins L, Francis J, Islam R, O'Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12(1):77.

59. Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program Sustainability Assessment Tool: a new instrument for public health programs. Prev Chronic Dis. 2014;11:130184.

60. Calhoun A, Mainor A, Moreland-Russell S, Maier RC, Brossart L, Luke DA. Using the Program Sustainability Assessment Tool to assess and plan for sustainability. Prev Chronic Dis. 2014;11:130185.

61. Palinkas LA, Chou CP, Spear SE, Mendon SJ, Villamar J, Brown CH. Measurement of sustainment of prevention programs and initiatives: the sustainment measurement system scale. Implement Sci. 2020;15(1):71.

62. Sandelowski M. Real qualitative researchers do not count: the use of numbers in qualitative research. Res Nurs Health. 2001;24(3):230–40.

63. Wood M, Christy R. Sampling for possibilities. Qual Quant. 1999;33(2):185–202.

64. Chang Y, Voils CI, Sandelowski M, Hasselblad V, Crandell JL. Transforming verbal counts in reports of qualitative descriptive studies into numbers. West J Nurs Res. 2009;31(7):837–52.

65. Yin RK. Case study research and applications. Los Angeles: Sage; 2018.

66. Bauer MS, Weaver K, Kim B, Miller C, Lew R, Stolzmann K, et al. The Collaborative Chronic Care Model for mental health conditions: from evidence synthesis to policy impact to scale-up and spread. Med Care. 2019;57(10 Suppl 3):S221–7.

67. Miller CJ, Sullivan JL, Connolly SL, Richardson EJ, Stolzmann K, Brown ME, et al. Adaptation for sustainability in an implementation trial of team-based collaborative care. Implement Res Pract. 2024;5:26334895231226197.

68. Curran GM, Smith JD, Landsverk J, Vermeer W, Miech EJ, Kim B, et al. Design and analysis in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health: translating science to practice. 3rd ed. New York: Oxford University Press; in press.

69. Salvati ZM, Rahm AK, Williams MS, Ladd I, Schlieder V, Atondo J, et al. A picture is worth a thousand words: advancing the use of visualization tools in implementation science through process mapping and matrix heat mapping. Implement Sci Commun. 2023;4(1):43.

70. Whitaker RG, Sperber N, Baumgartner M, Thiem A, Cragun D, Damschroder L, et al. Coincidence analysis: a new method for causal inference in implementation science. Implement Sci. 2020;15(1):108.

71. Lewis CC, Powell BJ, Brewer SK, Nguyen AM, Schriger SH, Vejnoska SF, et al. Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration: protocol for generating a research agenda. BMJ Open. 2021;11(10):e053474.

72. Kilbourne AM, Geng E, Eshun-Wilson I, Sweeney S, Shelley D, Cohen DJ, et al. How does facilitation in healthcare work? Using mechanism mapping to illuminate the black box of a meta-implementation strategy. Implement Sci Commun. 2023;4(1):53.

73. Kim B, Cruden G, Crable EL, Quanbeck A, Mittman BS, Wagner AD. A structured approach to applying systems analysis methods for examining implementation mechanisms. Implement Sci Commun. 2023;4(1):127.

74. Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19(2):e1003918.

75. Luke DA, Powell BJ, Paniagua-Avila A. Bridges and mechanisms: integrating systems science thinking into implementation research. Annu Rev Public Health. In press.


Acknowledgements

The authors sincerely thank the project participants for their time, as well as the project team members for their guidance and support. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

Funding

This project was funded by VA grant QUE 20-026 and was designed and conducted in partnership with the VA Office of Mental Health and Suicide Prevention.

Author information

Authors and affiliations

Center for Healthcare Organization and Implementation Research (CHOIR), VA Boston Healthcare System, 150 South Huntington Avenue, Boston, MA, 02130, USA

Bo Kim, Madisen E. Brown, Samantha L. Connolly, Elizabeth G. Spitzer, Hannah M. Bailey & Christopher J. Miller

Harvard Medical School, 25 Shattuck Street, Boston, MA, 02115, USA

Bo Kim, Samantha L. Connolly & Christopher J. Miller

Center of Innovation in Long Term Services and Supports (LTSS COIN), VA Providence Healthcare System, 385 Niagara Street, Providence, RI, 02907, USA

Jennifer L. Sullivan

Brown University School of Public Health, 121 South Main Street, Providence, RI, 02903, USA

Jennifer L. Sullivan

VA Rocky Mountain Mental Illness Research, Education and Clinical Center (MIRECC), 1700 N Wheeling Street, Aurora, CO, 80045, USA

Elizabeth G. Spitzer

VA Northeast Program Evaluation Center, 950 Campbell Avenue, West Haven, CT, 06516, USA

Lauren M. Sippel

Geisel School of Medicine at Dartmouth, 1 Rope Ferry Road, Hanover, NH, 03755, USA

Lauren M. Sippel

VA Office of Mental Health and Suicide Prevention, 810 Vermont Avenue NW, Washington, DC, 20420, USA

Kendra Weaver


Contributions

Concept and design: BK, JS, and CM. Acquisition, analysis, and/or interpretation of data: BK, JS, MB, SC, ES, and CM. Initial drafting of the manuscript: BK. Critical revisions of the manuscript for important intellectual content: JS, MB, SC, ES, HB, LS, KW, and CM. All the authors read and approved the final manuscript.

Corresponding author

Correspondence to Bo Kim.

Ethics declarations

Ethics approval and consent to participate

This project was determined to be non-research by the VA Boston Research and Development Service, and therefore did not require oversight by the Institutional Review Board (IRB).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

COREQ (COnsolidated criteria for REporting Qualitative research) Checklist.

Additional file 2.

Data input, tasks performed, and analysis output for MMCS Steps 5 through 9.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Cite this article

Kim, B., Sullivan, J.L., Brown, M.E. et al. Sustaining the collaborative chronic care model in outpatient mental health: a matrixed multiple case study. Implementation Sci 19, 16 (2024). https://doi.org/10.1186/s13012-024-01342-2


Received: 14 June 2023

Accepted: 21 January 2024

Published: 19 February 2024

DOI: https://doi.org/10.1186/s13012-024-01342-2


Keywords

  • Collaborative care
  • Implementation
  • Interdisciplinary care
  • Mental health
  • Sustainability



Comprehensive analysis of energy efficiency and performance of ARM and RISC-V SoCs

  • Open access
  • Published: 20 February 2024

Daniel Suárez, Francisco Almeida & Vicente Blanco
Over the past few years, ARM has been the dominant player in embedded systems and System-on-Chips (SoCs). With the emergence of hardware platforms based on the RISC-V architecture, a practical comparison focusing on their energy efficiency and performance is needed. In this study, our goal is to comprehensively evaluate the energy efficiency and performance of ARM and RISC-V SoCs in three different systems. We will conduct benchmark tests to measure power consumption and overall system performance. The results of our study are valuable to developers and researchers looking for the most appropriate hardware platform for energy-efficient computing applications. Our observations suggest that RISC-V Instruction Set Architecture (ISA) implementations may demonstrate lower average power consumption than ARM, but this does not automatically imply a superior performance per watt ratio for RISC-V. The primary focus of the study is to evaluate and compare these ISA implementations, aiming to identify potential areas for enhancing their energy efficiency. Furthermore, to ensure the practical applicability of our findings, we will use the Computational Fluid Dynamics software OpenFOAM. This step serves to validate the relevance of our results in real-world scenarios. It allows us to fine-tune execution parameters based on the insights gained from our initial study. By doing so, we aim not only to provide meaningful conclusions but also to investigate the transferability of our results to practical applications. Our analysis will also scrutinize the capabilities of these SoCs when handling nonsynthetic software workloads, thereby broadening the scope of our evaluation.


1 Introduction

Modern high-performance computing (HPC) systems exhibit substantial energy demands when operating at peak capacity. To illustrate, consider the top five systems listed on the Top500 [1] list, which consume between 15 and 21 megawatts of power, exclusive of the energy required for cooling infrastructure. This level of energy consumption presents significant challenges both economically and environmentally. The search for solutions to improve energy efficiency has become a priority for the HPC community.

In response to the evolving demands in computing, there is growing interest in alternative hardware platforms that implement different Instruction Set Architectures (ISAs) with superior performance per watt. RISC-V [2] and ARM [3] are particularly noteworthy in this context. Both offer advantages over traditional architectures such as x86, including simplicity, scalability, and a reduced instruction set. In particular, RISC-V distinguishes itself as an open specification.

The primary objective of this document is to perform a comprehensive comparison of the energy efficiency and performance of hardware platforms implementing the RISC-V and ARM ISAs. It is essential to understand that while an ISA is a specification rather than a piece of hardware, it indirectly influences the performance and power consumption of a system through its hardware implementations. Factors such as instruction efficiency, the complexity of executing certain commands, and the overall architectural design associated with an ISA implementation can significantly impact a system's performance and power usage. Our study aims to delve into these dynamics by evaluating ARM- and RISC-V-based SoCs across various tests.

To achieve this goal, we will perform and analyze a range of benchmark tests to assess the performance and energy efficiency of these hardware implementations. The findings of this study will be invaluable to developers and researchers who are in the process of selecting a hardware platform for energy-efficient computing applications. Ultimately, our aim is to provide a thorough and rigorous analysis of hardware platforms implementing RISC-V and ARM ISAs and to determine which of these is more suitable for energy-efficient computing.

Expanding on this objective, we also look at the practical application of these architectures in the context of OpenFOAM [4], a Computational Fluid Dynamics (CFD) software that plays a pivotal role in industries ranging from aerospace engineering to automotive engineering. It allows simulation of fluid flow and heat transfer phenomena, helping to optimize designs, predict performance, and solve complex fluid dynamics problems.

By incorporating OpenFOAM into our analysis, we aim to validate the applicability of our findings in the real world. This validation process not only helps us fine-tune the execution parameters but also underscores the importance of our results in solving practical problems. Furthermore, it allows us to assess the capabilities of these System-on-Chips (SoCs) when tackling compute-intensive applications like OpenFOAM.

The structure of this work is as follows: In Sect. 2, we provide a brief overview of the current state of energy efficiency studies in RISC-V architectures. In Sect. 3, we detail the methodology used and the benchmarks conducted in our comparative analysis. In Sect. 4, we present the experimental results obtained from comparing all evaluated System-on-Chip (SoC) implementations on various benchmarks. Section 5 reveals the performance of our SoCs when tested with the OpenFOAM CFD application. Finally, in Sect. 6, we present the conclusions drawn and discuss potential avenues for future work to further enhance the energy efficiency of these architectures.

Our approach is designed to advance energy-efficient computing by offering valuable insights that assist industry professionals and researchers in selecting the most appropriate hardware implementations of ISAs for their projects. This aims to maximize performance while minimizing energy consumption.

2 Related work

In recent years, there has been growing interest in the research and analysis of energy efficiency and performance in ARM-based systems. Studies such as the one conducted by Simakov and DeLeon [5] have provided valuable insights into the current state of energy efficiency in ARM architectures. In their study, they presented a comprehensive analysis of performance and energy efficiency using various benchmarks and applications running on high-performance ARM systems. These applications covered a variety of computational paradigms, including HPCC [6] (various HPC benchmarks), NWChem [7] (ab initio chemistry), OpenFOAM [4] (partial differential equation solver), GROMACS [8] (molecular simulation), AI Benchmark Alpha [9] (AI benchmark), and Enzo [10] (adaptive mesh refinement). Although ARM performance is generally considered slower than that of current x86 counterparts, it has been shown in many cases to be comparable to, and sometimes even surpass, previous generations of x86 CPUs [11]. Moreover, in terms of energy efficiency, considering both power consumption and execution time, ARM has proven more energy efficient than x86 processors in most instances. In our research, we expand this comparative analysis by introducing RISC-V architectures alongside ARM, thus broadening the scope of our investigation into energy efficiency and performance in these systems.

With our attention turned to RISC-V, studies in this field have also gained prominence. A notable study by Zaruba and Benini [12] analyzed the performance and energy efficiency of a RISC-V core specifically designed for Linux systems. Using Ariane [13], an open-source implementation of the 64-bit variant of RISC-V, the results demonstrated exceptional energy efficiency, reaching up to 40 GOp/s/W compared to other similar cores in the scientific literature. This study emphasized the significant role of instruction extensions in improving computational performance, rather than relying solely on high-frequency operation.

In a more recent study by Elsadek and Tawfik [14], an extensive examination of open-source RISC-V cores was conducted, categorizing them into high-performance and resource-constrained categories. The most optimized cores for resource-constrained devices were then selected and compared on the basis of resource utilization and energy consumption. This study identified the PicoRV32 [15] core as the most energy-efficient option for resource-constrained devices. These studies collectively highlight the potential of RISC-V as an open and scalable processor instruction set architecture, enabling the design of highly energy-efficient cores.

In the context of this discussion, it is pertinent to refer to a research paper [16] that offers a comparative survey of open-source, application-class RISC-V processor implementations. The authors conduct an in-depth analysis of the most prominent open-source RISC-V projects, assessing them using identical benchmarks and configuration settings. Their analysis covers academic impact, community engagement, technology support, evaluation platforms for both FPGA and ASIC implementations, and performance, area, power consumption, and energy efficiency metrics. The findings identify Rocket [17] and CVA6 [13] (formerly known as Ariane) as the most successful RISC-V implementations, with relevance to both commercial and academic projects. The insights gleaned from this research are instrumental in guiding decisions pertaining to RISC-V processor implementation across a diverse range of systems.

The combination of these studies, along with other research efforts, has generated a deeper understanding of the advantages of both ARM and RISC-V-based cores in terms of energy efficiency. This has further fueled interest in exploring and harnessing the full potential of these open and scalable architectures. However, it is essential to note that there is currently a gap in research directly comparing the energy efficiency of ARM and RISC-V architectures. This research gap serves as motivation for our study, which seeks to address this void and provide a comprehensive and up-to-date insight into the energy efficiency of both ARM and RISC-V systems. Through a detailed exploration and analysis of energy efficiency in both architectures, our aim is to make a significant contribution to existing knowledge and offer valuable insights for decision-making in the design and optimization of ARM and RISC-V-based systems.

3 Methodology

We compare the performance and energy efficiency of ARM- and RISC-V-based implementations. To do this, we ran benchmark tests on three different hardware platforms, whose specifications are shown in Table 1.

Although the specifications of the hardware platforms differ in some aspects, we designed our tests to be as fair as possible so that these differences have minimal impact on the results.

3.1 Benchmarks description

We used two benchmark tests: the NAS Benchmark [18] (version 3.2, SER) and the TFLite Benchmark [19] (models from MobileNet [20] v1 to v3).

The NAS Benchmark is a set of parallel programs that measure the performance of parallel computers. It includes different computing tasks used to evaluate the performance of various parallel algorithms. On the other hand, the TFLite Benchmark is a tool that is used to evaluate the performance of machine learning models on mobile and embedded devices. It measures how fast deep learning models converted to TensorFlow Lite format can make predictions.

In the context of the NAS Benchmarks, we conducted tests using the SER version, optimized for sequential execution. Due to the Nezha D1's single-core architecture, parallel execution of algorithms on this machine is not feasible. We ran tests with Classes W, A, and B to obtain results across different problem sizes. The sizes of these problems can be seen in Table 2.

In the case of TFLite, we conducted tests using models with an input size of 224×224 (ranging from MobileNet v1 to v3), without any limitation on the number of threads. This approach let each hardware platform make full use of the threads available on its processor. Additionally, we integrated the Xnnpack [21] delegate, a highly optimized library of floating-point neural network inference operators, into our testing methodology to assess its impact on energy consumption and performance.

These benchmarks serve as standard tools in computational assessments, widely recognized for gauging the performance of various computer systems, including processors, GPUs, and distributed systems.

3.2 A case study of a real application: OpenFOAM

We harnessed the capabilities of OpenFOAM [4], an open-source Computational Fluid Dynamics (CFD) software renowned for its versatility and robust capabilities in solving complex fluid dynamics problems. OpenFOAM is an invaluable tool widely adopted across various engineering fields, including automotive, aerospace, and environmental engineering. It excels in simulating and analyzing fluid flow and heat transfer phenomena.

In our study, we specifically utilized OpenFOAM version v1906 [22] for our evaluations. This choice was deliberate, as this version is precompiled for the platforms we used, eliminating the tedious process of compiling for three different architectures. The package was obtained directly from the Debian package repository, further ensuring the reliability and convenience of our computational setup. Through OpenFOAM, we ran simulations of both the "motorBike" case and the "rotorDisk" case from the official examples, with minor modifications to ensure that they run sequentially instead of in parallel.

These simulations are pivotal benchmarks in our study, representing real-world scenarios that require substantial computational resources. They enable us to evaluate the performance and energy efficiency of the hardware platforms for the handling of computationally intensive applications, showcasing their capabilities for addressing the challenges posed by fluid flow and heat transfer analysis.

The "motorBike" problem [23] in OpenFOAM is a simulation that computes steady flow patterns around a motorcycle and its rider, with fluid entering at a speed of 20 m/s from the "inlet" region and leaving from the "outlet" region. The motorcycle's surface is modeled as a no-slip wall, while the ground is assigned a velocity of 20 m/s. Notably, this simulation dynamically adjusts the number of parallel subdomains based on the selected number of processing cores.

In contrast, the "rotorDisk" [24] problem in OpenFOAM involves the application of cell-based momentum sources on velocity within a specified cylindrical region to approximate the mean effects of rotor forces. Here, the fluid flows in from the "inlet" region at 1 m/s in the direction of the Y-axis, exits from the "outlet," and features a rotating zone that spins at a rate of 1000 rpm around the Y-axis.

Although OpenFOAM is not a commonly employed tool on System-on-Chips (SoCs), it is more typically associated with high-performance computing environments because of its considerable computational requirements. Consequently, this case study not only delves into the intricacies of the problem itself but also extends our understanding of how SoCs can be harnessed to address these demanding computational challenges. Moreover, this research breaks new ground by exploring the utilization of OpenFOAM in architectures such as RISC-V, where research in this context is relatively limited. This highlights the adaptability of these applications and unveils fresh opportunities for their integration into less-explored domains of computing, further emphasizing their potential within the field of SoCs.

3.3 Test execution

To ensure the impartiality of our tests, we developed Python scripts capable of running tests concurrently on all hardware platforms. These scripts supervised the testing procedures and maintained a consistent temperature across all hardware platforms throughout the testing process. Before commencing each batch of tests, the scripts continuously monitored the hardware platform’s temperature and waited until all reached a predetermined base temperature. This base temperature represented the point when the hardware platforms were in an idle state, ensuring a uniform starting point for all tests. Only after reaching this base temperature did the scripts initiate the next set of tests. This strategy improves the fairness of our testing process and acts as a preventive measure against frequency fluctuations in our hardware platforms caused by temperature increases. It ensures our results remain highly reproducible and aligned with the specified standards.

The tests were run three times on each hardware platform, and we averaged the results to obtain the final values. We plotted the results and performed statistical analysis to compare the energy efficiency and performance of the different hardware platforms. To acquire precise energy data for our study, we used AccelPowerCape [25], a combination of a BeagleBone Black [26] and the Accelpower module [27]. This module incorporates INA219 [28] sensors to measure current, voltage, and power consumption. To achieve high-precision data collection, we used a customized version of the pmlib library [29], a server daemon purposely designed for monitoring energy consumption. This implementation was accessed through the EML [30] pmlib driver.

Our methodology involved establishing a physical connection by connecting a cable from the power source of the devices under scrutiny to the AccelPowerCape, which facilitates real-time monitoring of energy consumption on our target devices. This approach ensured the acquisition of accurate and reliable energy-related insights within the context of our study.

4 Benchmark results

This section will present the results of the NAS Benchmark and TFLite Benchmark tests on the SoCs mentioned above. We will analyze the performance and energy efficiency of these SoCs, providing a clear comparison of their capabilities.

4.1 Power consumption analysis

In the experimental results of power consumption displayed in Fig. 1a–d, we measured and compared the average power consumption in watts for each hardware platform. These figures illustrate the average rate at which each device consumes power while running the benchmark tests, giving insight into each platform's typical power usage under test conditions and allowing a clear comparison of their power consumption profiles.

Our observations reveal that the Nezha D1 consistently drew the least power in both the TFLite Benchmark and the NAS Benchmark, which does not necessarily imply a more efficient use of power in terms of performance per watt. In this context, we refer specifically to raw power consumption figures, not to the effectiveness or productivity of each platform.

The Odroid XU4 ranked second in power consumption, while the Rock960 generally drew the most power; again, this refers solely to average power draw, without considering computational output.

It should be noted that the power consumption of the Nezha D1 remained stable throughout all scenarios, in contrast to the other hardware platforms, whose power consumption exhibited variations depending on the specific benchmark being executed.

Although these measurements offer crucial information on the raw power usage of each platform, they do not directly translate into assessments of power efficiency in terms of performance per watt. Performance per watt is a distinct metric that evaluates the computational output relative to energy consumption. Therefore, a lower power consumption, as observed on some platforms, does not necessarily imply a higher efficiency in this specific metric. This distinction is crucial for a comprehensive understanding of the energy characteristics of each platform.

Fig. 1 Power consumption results for NAS and TFLite Benchmarks

4.2 Performance analysis

In this section, we present the results of our performance analysis, which involved measuring the time each hardware platform takes to complete the tests, as detailed in Fig. 2a–d.

Scrutiny of the execution times revealed a significant disparity: the Nezha D1 consistently exhibited the longest execution times among the hardware platforms tested. This finding suggests that the Nezha D1 may not be the most efficient choice for tasks where rapid completion is a critical requirement, warranting consideration of alternative options.

The Odroid XU4 was the second-slowest performer overall, indicating that it too may not excel in scenarios that require swift processing. Notably, however, it outperformed the Rock960 in the CG problem test, showing superior performance in specific computational tasks.

Conversely, the Rock960 emerged as the overall performance leader, surpassing the other hardware platforms evaluated in most scenarios. Its commendable speed makes the Rock960 an attractive choice for applications requiring rapid processing and execution.

Fig. 2 Performance results for NAS and TFLite Benchmarks

4.3 Total energy consumption analysis

In this section, we examine the results for total energy consumption, quantified in joules, for each of the hardware platforms evaluated. For more granular details, we refer the reader to Fig. 3a–d. This metric accounts for the cumulative amount of energy used by each device throughout the entire execution of the benchmark tests. It reflects the total energy expenditure, combining both the rate of power consumption and the time the device took to complete each test.

Our analysis reveals compelling insights into the energy consumption profiles of these hardware platforms. Notably, the Nezha D1 consistently exhibited the highest total energy consumption across all benchmarks: although its average power draw was the lowest, its much longer completion times resulted in greater overall energy consumption than the other hardware platforms.

In contrast, the Odroid XU4 and the Rock960 displayed similar energy consumption patterns, with occasional variations where one platform consumed marginally more energy than the other and vice versa. This observation highlights a degree of parity in energy efficiency between the Odroid XU4 and the Rock960 despite differences in their performance characteristics.

By scrutinizing the total energy consumption figures, our analysis provides valuable insight into the power profiles of these hardware platforms and aids in selecting the most suitable platform based on the specific energy constraints and requirements of the intended application. These findings underscore the importance of considering power efficiency and performance when making hardware choices in computational scenarios.

Fig. 3 Energy consumption measurements for NAS and TFLite Benchmarks

4.4 Operations per second analysis

In the context of the NAS Benchmark, “operations per second” refers to the number of floating-point operations executed by the hardware platform per second. In contrast, the TFLite Benchmark indicates the number of inferences conducted by the hardware platform per second.

Upon a comprehensive examination of the NAS test results presented in Fig. 4a–c, it is evident that the Rock960 delivers the highest number of operations per second, surpassing the Odroid XU4 in most cases. The Nezha D1, while conforming to its specified capabilities, demonstrated comparatively lower performance than its competitors.

Shifting our focus to the TFLite Benchmark results, shown in Fig. 4d, we observe substantial disparities in the number of operations per second among the evaluated hardware platforms. The Rock960 emerges as the best performer, consistently outperforming its rivals. This underscores the Rock960's remarkable ability to achieve more inferences per second, rendering it an enticing choice for applications necessitating swift data processing. By contrast, the Nezha D1 consistently exhibited a lower number of operations per second than the other hardware platforms, indicating a substantial deficit.

It should be noted that the inclusion of the Xnnpack delegate did not appear to significantly influence the overall results, suggesting that its impact on the number of operations per second was relatively negligible within the scope of these evaluations.

These insights from our analyses offer valuable guidance for selecting the most suitable hardware platform, considering the number of operations per second as a critical performance metric. Such considerations are crucial in a broad spectrum of computational applications, where optimizing resource utilization and achieving the desired level of operations per second are important.

Fig. 4 Operations per second measurements for NAS and TFLite Benchmarks

4.5 Energy efficiency analysis

In the NAS and TFLite Benchmarks, it is important to distinguish the concept of “energy efficiency.” In the NAS Benchmark, this term refers to the number of floating-point operations performed per second per watt, while in the TFLite Benchmark, it implies the number of inferences made per second per watt.

Examining the NAS test results presented in Fig. 5a–c, it is apparent that energy efficiency varies between the evaluated hardware platforms. In particular, the Odroid XU4 and the Rock960 exhibit competitive energy efficiency metrics in different tests. By contrast, the Nezha D1 consistently displays lower energy efficiency than the other hardware platforms.

A more detailed exploration of the TFLite test results in Fig. 5d reveals a consistent pattern. The Odroid XU4 emerges as the hardware platform with superior energy efficiency in this benchmark, even though it does not achieve the highest raw performance in this test. This observation adds an intriguing dimension to our findings, highlighting that raw performance does not always correlate directly with energy efficiency. By contrast, the Nezha D1 consistently lags in energy efficiency, indicating a notable shortfall in this crucial metric compared to its counterparts. Additionally, including the Xnnpack delegate did not substantially influence the results, occasionally even yielding lower energy efficiency than the same configuration without the delegate.

These energy efficiency results might seem surprising given that they do not align with the average power consumption observed for each hardware platform in Fig. 1a–d. This discrepancy arises because, although one hardware platform may have consumed less power on average, if the number of operations performed per second is significantly lower, the energy efficiency will also be reduced. This highlights the importance of not only evaluating power consumption in isolation but also considering overall performance in terms of operations per second to gain a true understanding of energy efficiency.

These insights from our analyses offer valuable guidance for selecting the most suitable hardware platform, focusing on energy efficiency as a pivotal metric. Such considerations are paramount in the context of a broad spectrum of computational applications where optimizing resource utilization and achieving the desired level of energy efficiency are critical objectives.

Fig. 5 FLOPS per watt measurements for NAS and TFLite Benchmarks

4.6 Temperature analysis

In pursuing a comprehensive assessment of hardware platforms, we recognized the critical importance of incorporating temperature measurements into our analysis. Temperature, often overlooked but profoundly influential, can significantly affect a machine's frequency, performance, and energy efficiency, making it pivotal for understanding how these platforms behave under different conditions. Detailed results of these temperature measurements, available in Fig. 6a–d, offer insight into how each platform's temperature fluctuated during our experiments, providing valuable context for interpreting hardware performance under varying workloads and environmental conditions.

In particular, Nezha D1 consistently maintained the lowest temperatures in all tests, with modest temperature increases compared to other platforms. This suggests efficient thermal management. Surprisingly, despite active cooling fans, the Odroid XU4 experienced significant temperature spikes, likely due to its higher number of CPU cores. The Rock960 recorded the second-lowest temperatures, highlighting its thermal efficiency.

We conducted prestress tests to establish the upper temperature threshold at which these platforms could operate safely without any performance degradation. This limit is represented by the dashed lines in the figures. As the results show, it was never reached during our experiments, confirming the absence of thermal throttling.

Fig. 6 Temperature measurements for NAS and TFLite Benchmarks

5 Computational results for the OpenFOAM case study

We conducted experiments using the OpenFOAM software to validate the transferability of our benchmark results to a real-world application. The results of these experiments are depicted in Fig. 7a–d, which includes metrics for power consumption ("Power OpenFOAM"), performance ("Performance OpenFOAM"), energy consumption ("Energy OpenFOAM"), and temperature ("Temperature OpenFOAM"). Our results align closely with our expectations, particularly in the context of the two simulations, "motorBike" and "rotorDisk." In terms of average power consumption, as illustrated in Fig. 7a, the Nezha D1 demonstrated the lowest power usage, followed by the Odroid XU4, while the Rock960 registered the highest power consumption.

Moreover, in terms of execution time, as depicted in Fig. 7b, the Nezha D1 exhibited the longest duration, with the Odroid XU4 ranking as the second slowest and the Rock960 showcasing the fastest execution times.

Regarding energy usage, as presented in Fig. 7c, the Nezha D1 displayed the highest energy consumption, followed by the Odroid XU4; however, the difference was not as pronounced as in our earlier benchmarking experiments. Finally, with respect to the temperature measurements in Fig. 7d, and consistent with our previous experiments, the Nezha platform maintained the lowest temperature readings, the Odroid XU4 recorded the highest, and the Rock960 fell in between.

These findings collectively yield valuable insights into the real-world performance and energy efficiency of these hardware platforms, reinforcing the trends observed in our benchmarking exercises. They demonstrate the adaptability of such applications to less powerful machines, revealing their ability to perform effectively even when they do not achieve peak execution speed. In particular, their successful execution on relatively recent architectures such as RISC-V underscores their versatility and potential for deployment across diverse computing environments. This exploration sheds light on the resilience of these applications and opens exciting avenues for harnessing their capabilities in a broader spectrum of computing systems.

Fig. 7 OpenFOAM metrics

6 Conclusion

In conclusion, our benchmark results have consistently reflected the anticipated characteristics of each hardware platform. Both the Odroid XU4 and the Rock960 have demonstrated superior performance in the NAS and TFLite tests compared to the Nezha D1. However, it is essential to acknowledge that this improved performance also comes at the cost of higher average power consumption compared to the Nezha D1.

Interestingly, our analysis suggests that architectural differences did not play a dominant role in determining the outcomes. Instead, the variations in performance and power consumption are predominantly attributed to the unique features of each device rather than the underlying architecture. Notably, RISC-V architecture implementations are still in a relatively nascent stage compared to their more established counterparts. As RISC-V continues to evolve and undergo optimization, further enhancements can be anticipated in both performance and energy efficiency in the future.

When selecting a device, it is crucial to consider the specific use case and environmental constraints. For scenarios where power consumption is a critical concern, such as environments with a limited power supply, the Nezha D1 emerges as the more favorable choice. Conversely, when prioritizing performance or energy efficiency in contexts where power constraints are not an issue, the Odroid XU4 and Rock960 represent more suitable alternatives.

Furthermore, our investigation revealed that the inclusion of the Xnnpack delegate did not exert a substantial influence on our benchmark results. In some instances, its usage even yielded lower results compared to the baseline, indicating that the impact of this delegate may vary depending on the specific application and hardware configuration. Thus, careful consideration of the delegate’s utility should be exercised when incorporating it into similar benchmarking and computational tasks.

In the context of practical applications and simulations, our OpenFOAM case study closely echoed our benchmark findings, underscoring the valuable insights these benchmarks offer when assessing hardware performance and energy efficiency. This synergy between synthetic benchmarks and real-world tasks provides a comprehensive perspective for developers and researchers looking to make informed decisions about the selection of hardware for their specific computational needs. Furthermore, our study contributes to the expansion of knowledge on the utilization of CFD tools like OpenFOAM, particularly on SoCs that use ARM and RISC-V architectures, which have been relatively less explored in the existing research landscape.

Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.


Funding

Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work has been supported by the Spanish Ministry of Science and Innovation with projects PID2019-107228RB-I00, TED2021-131019B-I00, and PDC2022-134013-I00; and the Spanish network CAPAP-H.

Author information

Daniel Suárez and Francisco Almeida have contributed equally to this work.

Authors and Affiliations

Computer Science and Systems Department, Universidad de La Laguna (ULL), San Francisco de Paula s/n, 38270, La Laguna, Spain

Daniel Suárez, Francisco Almeida & Vicente Blanco



Corresponding author

Correspondence to Vicente Blanco.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest that are relevant to the content of this article.

Ethical approval

Not applicable.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Suárez, D., Almeida, F. & Blanco, V. Comprehensive analysis of energy efficiency and performance of ARM and RISC-V SoCs. J Supercomput (2024). https://doi.org/10.1007/s11227-024-05946-9


Accepted: 28 January 2024

Published: 20 February 2024

DOI: https://doi.org/10.1007/s11227-024-05946-9


Keywords

  • Performance
  • Energy consumption




COMMENTS

  1. What is benchmarking, and how to do a competitive analysis

    Benchmarking is the process of determining the best processes, strategies, and techniques for achieving your business goals. That sounds simple enough, right? You're just trying to create the best business possible. The problem is that you sometimes need outside data to measure success.

  2. Understanding Benchmarking Analysis: A Step-by-Step Guide

    Step 1: Identify Areas for Benchmarking To kickstart the benchmarking process, the first step is to identify the specific areas or processes in your organization that you want to benchmark against industry standards or top performers.

  3. Competitive Benchmarking: Best Practice Guide

    Competitive benchmarking: Best practice guide // March 28, 2022 // 11min read What is competitive benchmarking, and how can you use it to get ahead of your competition? Read on to learn how to create KPIs that effectively chart success and the best practices for developing a competitive benchmarking strategy. What is competitive benchmarking?

  4. What is Benchmarking? Technical & Competitive Benchmarking Process

    Benchmarking on ASQTV. Benchmarking is defined as the process of measuring products, services, and processes against those of organizations known to be leaders in one or more aspects of their operations. Benchmarking provides necessary insights to help you understand how your organization compares with similar organizations, even if they are in ...

  5. Benchmark Reporting Guide: Prepare, Analyze & Present Data

    A benchmark report is a type of business report that allows you to see how your product, performance, or company compares to similar products or companies. "A benchmark report is a top way of debunking what's the best performance being acquired in a particular organization or by a diverse industry," explains Eden Cheng of PeopleFinderFree.

  6. Competitive Benchmarking: What It is and How to Do It

    The aim? To study the strategies and practices competitors use and get a comparative overview of how well you're doing in the market. The key to successful competitive benchmarking, however, is to stay in charge of the process by pre-defining competitors to analyze. Typically, folks get carried away because they study one too many competitors.

  7. How to use competitive benchmarking for market research

    Competitive analysis is also sometimes described as competitor research. The purpose of this type of research is to identify each of your competitors, and to perform an in-depth assessment of their strengths and weaknesses. You can get an understanding of their strategies and tactics in order to obtain insight into their day-to-day operations.

  8. 8 Steps of the Benchmarking Process

    8 Steps of the benchmarking process Reading time: about 7 min Topics: Process improvement Businesses are always striving for high performance, from creating more efficient processes to selling more of their products and services. But how does a company determine whether it is successful?

  9. How to use benchmarking to set your standards for success

    Summary How do you know when your work is successful? Benchmarking is a data-driven process that helps you create your own standards to measure success. Setting benchmarks is a simple way to set clear expectations for your team. In this article, learn the different types of benchmarking and the steps to create your own benchmarks.

  10. (PDF) Benchmarking process formalization and a case study

    Benchmarking process formalization and a case study Benchmarking for Quality Management & Technology 5 (2):101-125 DOI: 10.1108/14635779810212356 Authors: Gülçin Büyüközkan Jean Luc Maire...

  11. Smart Benchmarking Starts with Knowing Whom to Compare Yourself To

    Comparing your organization to peers - also known as benchmarking - lets you understand how you're doing, identify performance gaps and opportunities to improve, and highlight peer achievements...

  12. Improving project system performance through benchmarking

    Griffith, A. F. (2006). Improving project system performance through benchmarking. Paper presented at PMI® Global Congress 2006—EMEA, Madrid, Spain. Newtown Square, PA: Project Management Institute. Introduction Benchmarking is two things: setting goals by using objective, external standards and learning from others (Boxwell, 1994).

  13. Case Study: Best Practice and Benchmark Analysis

    Case Studies; Case Study: Best Practice and Benchmark Analysis; Situation: One of the world's leading academic and research institutions had established a reputation as a credible contributor to the study of gender in the workplace. They also recognized that today's global and multicultural environment was increasing the need for graduates ...

  14. Case studies of successful benchmarking and diagnostic ...

    Some case studies of successful benchmarking and diagnostic testing in SMEs: 1. One company that successfully used benchmarking to improve its performance is a small manufacturing business that ...

  15. Developing Growth Strategies Leveraging Market Analysis and Competitive

    In this market analysis and competitive benchmarking case study, we outline how Clarkston helped a spirits client develop its growth strategy. The whiskey category is highly competitive, with several major players owning a large portion of market share and new entrants continuing to disrupt the industry. The brand was seeking to understand what ...

  16. Benchmarking, Case Study and Examples

    JULY 5, 2023 Here I aim to shed light on what pay transparency looks like at Compt, explain its mechanics and influence on overall compensation structures and raises, present real-world examples of its benefits, and provide practical considerations for organizations contemplating this approach.

  17. Multi-criteria analysis of measures in benchmarking: Dependability

    The application of the quality model in the analysis process will be later illustrated in Section 4 through different case studies. 3.1. Benchmark user and target system. ... This section shows the feasibility of our multi-criteria analysis methodology along three case studies in the domain of distributed systems, such as web servers, on-line ...

  18. Benchmarking guide

    Identifying potential benchmarking partners (those companies you would like to go and see) Step 3 Analysis. Establish current levels of performance. Map the existing process. Benchmark, by mapping competitor processes and comparing performance. Identify the potential gap. Step 4 Integration.

  19. Business Analysis: Key Definitions & Strategy Analysis

    This module delves into Strategy Analysis, a critical aspect of business analysis. Students will explore techniques such as analyzing the current state, using tools like the Business Model Canvas, SWOT analysis, and Business Process Analysis. They will also learn about bench marking, document analysis, and risk assessment.

  20. Benchmarking & Best Practice Strategy

    Benchmarking Study. Compares your organization's results to nationwide statistics or relevant peer or public data sets to allow you to calibrate your organization's performance on key metrics, offerings, and other important attributes. "The Australian National University is working to ensure it offers the best accommodation to its students.

  21. Conducting a UX benchmarking study step by step

    Step 1: Identify behavioral and attitudinal issues using a UX audit Step 2: Define standards you will look up to Step 3: Analyse comparative results and propose solutions Step 4: Implement changes Step 5: Gather post-change results on the UX metrics of interest Step 6: Visualize and report the findings Step 7: Record lessons learned

  22. Sustaining the collaborative chronic care model in outpatient mental

    Sustaining evidence-based practices (EBPs) is crucial to ensuring care quality and addressing health disparities. Approaches to identifying factors related to sustainability are critically needed. One such approach is Matrixed Multiple Case Study (MMCS), which identifies factors and their combinations that influence implementation. We applied MMCS to identify factors related to the ...

  23. Case Study: Benchmarking Tools

    Abstract. Benchmark tools are useful to provide some "standard" performance indexes for storage systems with specific requirements. This chapter shows how to identify the access pattern of benchmark results. The first tool is SPC-1C from the Storage Performance Council (SPC). After capturing the pattern, I developed a synthetic emulator to ...

  24. Sustainable Supply Chain Practices in the Oil and Gas Industry: A Case

    Sustainability reporting within the oil and gas (O&G) industry started back in the 1990s and has improved longitudinally since then. However, when reporting their sustainability-related practices and initiatives, O&G companies seldomly mention the term green supply chain management (GSCM). The study aims to investigate the development of GSCM practices in the O&G sector and to categorize how ...

  25. Benchmark GIM Case Analysis Paper (docx)

    Sociology document from Liberty University, 14 pages, 1 Benchmark GIM Case Analysis Paper The Sutter Family Case Analysis Paper Cottinda S. Bell Ethelyn R. Armstrong School of Social Work, Norfolk State University SWK 313-Generalist Practice: Individuals and Families DR. Gardenella Green December 1, 2023 2

  26. Comprehensive analysis of energy efficiency and performance ...

    In recent years, there has been a growing interest in the research and analysis of energy efficiency and performance in ARM-based systems. Studies such as the one conducted by Simakov and Deleon [] have provided valuable insights into the current state of energy efficiency in ARM architectures.In their study, they presented a comprehensive analysis of performance and energy efficiency using ...

  27. BUS-660-Data Analysis Case Study.docx

    BENCHMARK: DATA ANALYSIS CASE STUDY The average number of customers in the waiting line is 0.2979. The average number of customers in the system is 0.7593. The average time a customer waits until the service technician arrives is 1.6082 hours. The average time a customer waits until the machine is back in operation is 4.1082 hours. The probability that a customer will have to wait more than ...

  28. What Makes A Great User-Generated Content Creator Portfolio?

    Unfortunately, without case studies, there's no way to know if these ads offered a return on ad spend. If these three creators are trying to get the same job, the Thoughtful Strategist has the ...

  29. Judge in Trump's New York fraud case orders him to pay $354 million in

    Trump fined $354M in civil fraud case, cannot do business in New York for 3 years 33:42. Former President Donald Trump and the Trump Organization must pay $354 million in fines — a total that ...