Composite cylinders

Cylinder top

  • Low cylinder weight
  • No rust, no cleaning and painting
  • Explosion proof in fire
  • Flexible customization of threads and valves
  • Extensive color options
  • Gas level control

High quality composite cylinders for LPG storage and transportation.

Cylinders

Configurator

1. Additional information

Can be placed here, for example the owner of the cylinder or warnings required by laws and regulations.

2. Logotype

Any customer's logo or text can be placed here.

3. Shrink film sleeve compatible

The special design allows shrink film to fit the cylinder perfectly.

Can be placed inside the cylinder's handle.

5. DATAMATRIX

Readable from any position on the cylinder; contains the cylinder's unique number.

6. Logo, trademark or special pattern

Maximum size 240 x 120 mm. Can be see-through for easy gas level control.

Cutouts

Manufacturing

Our production of composite cylinders is modern, flexible, and capable of fulfilling any customer requirement in the shortest possible time. Our technology was developed by our own R&D team, which has remarkable experience in the field of composite cylinders. The team has developed a complete production line, including all machines and tools, to manufacture cylinders of consistent quality. The cylinder design has been thoroughly tested and has received positive responses from many customers. Our strength is having every aspect of the product and technology under one roof and under full control.

Our R&D team is involved in improving existing products as well as in developing new projects. These projects include new pressure cylinders for various purposes and with various properties, new manufacturing and testing equipment, and consultancy and troubleshooting for related production lines. We are always open to new ideas and projects. Do not hesitate to contact us if you need a strong partner in the development of pressure vessels made of composite materials, special equipment, or plastic parts. We do real research for real production.

Susocar Spain

Our client, Susocar AutoGas, takes an active part in various exhibitions. The most recent exhibition attracted a very large number of visitors, and the composite cylinders with multivalves drew the most interest.

Forklift Cylinders

Our client, Tomegas s.r.o. (Czech Republic), uses HPC Research composite cylinders with a clip-on system for forklift trucks.

Caravan Cylinders

Our partner in the UK, Carbon Zorro, is promoting our LPG composite cylinders together with their other related products under their new brand.

HPC Research

School of Computational Science and Engineering

College of Computing

High-Performance Computing

Research in high-performance computing (HPC) aims to design practical algorithms and software that run at the absolute limits of scale and speed. It is motivated by the incredible demands of “big and hairy” data-hungry computations, like modeling the earth’s atmosphere and climate, using machine learning methods to analyze every book and paper ever written, discovering new materials, understanding how biological cells work, simulating the dynamics of black holes, or designing city infrastructure to be more sustainable, to name just a few. To perform these computations requires new approaches for the efficient use of advanced computer systems, which might consist of millions of processor cores connected by high-speed networks and coupled with exabytes of storage. 

Specific HPC topic areas of interest within CSE include: 

  • Discrete and numerical parallel algorithms 
  • HPC applications in science and engineering 
  • Design of architecture-aware algorithms (e.g., GPU, FPGA, Arm) 
  • High-performance scientific software 
  • Performance analysis and engineering 
  • Post-Moore's Law computing 

HPC research at Georgia Tech is cross-cutting and multidisciplinary. CSE HPC faculty work closely with researchers across computing, social and natural sciences, and engineering domains. They lead interdisciplinary HPC research centers (see below) and contribute to HPC-driven domain specific research centers and institutes such as the Center for Relativistic Astrophysics (CRA) and the Institute for Materials (IMAT).

Related links:

Center for High Performance Computing (CHiPC)

Center for Research into Novel Computing Hierarchies (CRNCH)

Institute for Data Engineering and Science (IDEaS)

CSE faculty specializing in high-performance computing research:

Srinivas Aluru
Regents' Professor

Edmond Chow
Assistant Professor

Spencer Bryngelson

Haesun Park
Chair, Regents' Professor

Ümit Çatalyürek

Elizabeth Cherry
Associate Professor

Rich Vuduc

Published: 9 July 2024. Contributors: Stephanie Susnjara, Ian Smalley.

HPC is a technology that uses clusters of powerful processors that work in parallel to process massive, multidimensional data sets and solve complex problems at extremely high speeds.

HPC solves some of today's most complex computing problems in real time. HPC systems typically run at speeds more than one million times faster than the fastest commodity desktop, laptop or server systems.

Supercomputers, purpose-built computers that incorporate millions of processors or processor cores, have been vital to high-performance computing for decades. Unlike mainframes, supercomputers are much faster and can perform billions of floating-point operations per second.

Supercomputers are still with us; the fastest supercomputer is the US-based Frontier, with a processing speed of 1.206 exaflops, or 1.206 quintillion floating point operations per second (flops).1 But today, more organizations are running HPC services on clusters of high-speed computer servers hosted on premises or in the cloud.

HPC workloads uncover new insights that advance human knowledge and create significant competitive advantages. For example, HPC sequences DNA and automates stock trading. It runs artificial intelligence (AI) algorithms and simulations, like those enabling self-driving automobiles, that analyze terabytes of data streaming from IoT sensors, radar and GPS systems in real time to make split-second decisions.

There has been a decided shift in the tone of public discourse relative to utilization of HPC resources in the cloud over the last 12-24 months.

A standard computing system solves problems primarily by using serial computing. It divides the workload into a sequence of tasks and then runs the tasks one after the other on the same processor.

Parallel computing runs multiple tasks simultaneously across numerous computer servers or processors. HPC uses massively parallel computing, which harnesses tens of thousands to millions of processors or processor cores.
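
To make the distinction concrete, here is a minimal Python sketch using only the standard library's multiprocessing module: the same chunked sum-of-squares workload (a stand-in for a real HPC kernel) is computed serially and then in parallel across worker processes.

    # Minimal sketch: the same workload run serially and then in parallel.
    # The chunked sum of squares is only a stand-in for a real HPC kernel.
    from multiprocessing import Pool

    def sum_of_squares(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n = 10_000_000
        chunks = [(i, min(i + 1_000_000, n)) for i in range(0, n, 1_000_000)]

        # Serial computing: tasks run one after another on a single processor.
        serial_total = sum(sum_of_squares(c) for c in chunks)

        # Parallel computing: the same independent tasks run across processes.
        with Pool(processes=4) as pool:
            parallel_total = sum(pool.map(sum_of_squares, chunks))

        assert serial_total == parallel_total

On a real cluster the chunks would be spread across many nodes rather than local processes, but the decomposition idea is the same.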

An HPC cluster comprises multiple high-speed computer servers networked with a centralized scheduler that manages the parallel computing workload. The computers, called nodes, use either high-performance multi-core CPUs or, more likely today, GPUs, which are well suited for rigorous mathematical calculations, machine learning (ML) models and graphics-intensive tasks. A single HPC cluster can include 100,000 or more nodes.

Linux is the most widely used operating system for running HPC clusters, typically through distributions such as Ubuntu; Windows and Unix are also used.

All the other computing resources in an HPC cluster, such as networking, memory, storage and file systems, are high speed and high throughput. They are also low-latency components that can keep pace with the nodes and optimize the computing power and performance of the cluster.

HPC workloads rely on a message passing interface (MPI), a standard library and protocol for parallel programming that allows processes to communicate between nodes in a cluster or across a network.
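
As a minimal illustration of point-to-point message passing, here is a hedged sketch using the mpi4py Python bindings; it assumes an MPI implementation and the mpi4py package are installed, and the file name demo.py is just an example.

    # Run with, e.g.: mpirun -n 2 python demo.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        data = {"step": 1, "payload": [1.0, 2.0, 3.0]}
        comm.send(data, dest=1, tag=11)      # rank 0 sends a message to rank 1
    elif rank == 1:
        data = comm.recv(source=0, tag=11)   # rank 1 receives the message
        print(f"rank 1 received: {data}")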

High-performance computing (HPC) relies on conventional bits and processors used in classical computing. In contrast, quantum computing uses specialized technology based on quantum mechanics to solve complex problems. Quantum algorithms create multidimensional computational spaces that offer a much more efficient way of solving complex problems, like simulating how molecules behave, that classical computers or supercomputers can't solve quickly enough. Quantum computing is not expected to replace HPC anytime soon. Rather, the two technologies can be combined to achieve efficiency and optimal performance.
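
One way to see why such problems strain classical machines is to count the memory a classical simulation needs: an n-qubit state requires 2^n complex amplitudes. The illustrative NumPy sketch below applies a single Hadamard gate to a 20-qubit state vector; it is a toy example, not a production quantum simulator.

    # Toy sketch: classically simulating n qubits needs 2**n complex amplitudes.
    import numpy as np

    n_qubits = 20
    state = np.zeros(2 ** n_qubits, dtype=np.complex128)
    state[0] = 1.0                                  # start in |00...0>

    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    # Apply a Hadamard to the first qubit by factoring the state vector.
    state = (hadamard @ state.reshape(2, -1)).reshape(-1)

    print(f"{n_qubits} qubits -> {state.size:,} amplitudes, "
          f"{state.nbytes / 1e6:.0f} MB of memory")

Each additional qubit doubles the memory required, which is why even modest quantum systems quickly exceed what an HPC cluster can simulate exactly.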

As recently as a decade ago, the high cost of HPC, which involved owning or leasing a supercomputer or building and hosting an HPC cluster in an on-premises data center, put it out of reach for most organizations.

Today, HPC in the cloud, sometimes called HPC as a service or HPCaaS, offers a significantly faster, more scalable and more affordable way for companies to take advantage of HPC. HPCaaS typically includes access to HPC clusters and infrastructure hosted in a cloud service provider's data center, capabilities such as AI and data analytics, and HPC expertise.

Today, three converging trends drive HPC in the cloud.

Organizations across all industries increasingly depend on the real-time insights and competitive advantage of using HPC applications to solve complex problems. For example, credit card fraud detection—something we all rely on, and most have experienced at one time or another—relies increasingly on HPC to identify fraud faster and reduce annoying false positives, even as fraud activity expands and fraudsters' tactics change constantly.

Since the launch of technologies like ChatGPT, organizations have rapidly embraced the promise of generative AI (gen AI) to accelerate innovation and foster growth. This development has spurred an even greater demand for high-performance computing. HPC provides the high computational power and scalability to support large-scale AI-driven workloads. In a report from Intersect 360 Research, the total worldwide market for scalable computing infrastructure for HPC and AI was USD 85.7 billion in 2023, up 62.4% year-over-year, due predominantly to a near tripling of spending by hyperscale companies on their AI infrastructure. 2

Remote direct memory access (RDMA) enables one networked computer to access another networked computer's memory without involving either computer's operating system or interrupting either computer's processing. This helps minimize latency and maximize throughput, reducing memory bandwidth bottlenecks. Emerging high-performance RDMA fabrics—including InfiniBand, virtual interface architecture and RDMA over converged Ethernet—make cloud-based HPC possible.

Today, every leading public cloud service provider, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud and IBM Cloud®, offers HPC services. While some organizations continue to run highly regulated or sensitive HPC workloads on-premises, many are adopting or migrating to private-cloud HPC services provided by hardware and solution vendors.

HPC in the cloud allows organizations to apply many compute assets to solve complex problems and provides the following benefits:

  • Quickly configure and deploy intensive workloads.
  • Reduce time to results through scaling with on-demand capacity.
  • Gain cost-efficiency by harnessing technology to meet your needs and pay only for the compute power you use.
  • Use cloud provider management tools and support to architect your specific HPC workloads.

HPC applications have become synonymous with AI, particularly machine learning (ML) and deep learning apps. Today, most HPC systems are designed with these workloads in mind.

From data analysis to cutting-edge research, HPC is driving continuous innovation in use cases across the following industries:

The first attempt to sequence a human genome took 13 years; today, HPC systems can do the job in less than a day. Other HPC applications in healthcare and life sciences include medical record management, drug discovery and design, rapid cancer diagnosis and molecular modeling. HPC visualization helps scientists gather insights from simulations and quickly analyze data.

HPC clusters provide the high speed required to stream live events, render 3D graphics and special effects, and reduce production time and costs. They can also help media companies gain data-driven insights to achieve better content creation and distribution.

In addition to automated trading and fraud detection, HPC powers applications in Monte Carlo simulation and other risk analysis methods.
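
As a toy illustration of the Monte Carlo approach to risk analysis, the sketch below estimates a portfolio's one-day 95% value-at-risk under a simple normal-returns assumption; the portfolio value, mean return, and volatility are made-up example inputs, not a production risk model.

    # Toy Monte Carlo risk sketch: one-day 95% value-at-risk (VaR).
    # All inputs below are hypothetical example values.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    portfolio_value = 1_000_000.0   # USD (hypothetical)
    daily_mean = 0.0005             # mean daily return (hypothetical)
    daily_vol = 0.02                # daily volatility (hypothetical)

    simulated_returns = rng.normal(daily_mean, daily_vol, size=1_000_000)
    simulated_pnl = portfolio_value * simulated_returns
    var_95 = -np.percentile(simulated_pnl, 5)   # loss exceeded only 5% of the time

    print(f"Estimated 1-day 95% VaR: ${var_95:,.0f}")

On an HPC cluster, the same idea scales to far richer market models and billions of scenarios by distributing independent simulation batches across nodes.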

Two growing HPC use cases in this area are weather forecasting and climate modeling, both of which involve processing vast amounts of historical meteorological data and millions of daily changes in climate-related data points. Other government and defense applications include energy research and intelligence work.

In cases that sometimes overlap with government and defense, energy-related HPC applications include seismic data processing, reservoir simulation and modeling, geospatial analytics, wind simulation and terrain mapping.

The automotive industry uses HPC to simulate and optimize the design of products and processes. For instance, HPC can run computational fluid dynamics (CFD) applications, which analyze and solve challenges related to fluid flows. This includes simulating aerodynamics to reduce air drag and friction and enabling battery simulation to optimize battery performance and safety. 

HPC can analyze large amounts of data to identify patterns to help prevent cyberattacks or other security threats.

Tackle large-scale, compute-intensive challenges and speed time to insight with hybrid cloud HPC solutions.

The IBM Spectrum® LSF Suites portfolio redefines cluster virtualization and workload management by providing an integrated system for mission-critical HPC environments.

IBM Power Virtual Server is a family of configurable multitenant virtual IBM Power servers with access to IBM Cloud® services.

IBM Spectrum Symphony software delivers enterprise-class management by using open-source Terraform-based automation to run compute- and data-intensive distributed applications on a scalable, shared grid.

Quantum computing is on the verge of sparking a paradigm shift. Software that is reliant on this nascent technology, rooted in nature's physical laws, could soon revolutionize computing forever.

Supercomputing is a form of high-performance computing that performs calculations on an extremely powerful computer, reducing the overall time to solution.

The convergence of AI and HPC can generate intelligence that adds value and accelerates results, assisting organizations in maintaining competitiveness.

With HPC, enterprises are taking advantage of the speed and performance that comes with powerful computers working together.

Quantum computing uses specialized technology, including computer hardware and algorithms that take advantage of quantum mechanics, to solve complex problems that classical computers or supercomputers can’t solve, or can’t solve quickly enough.

Parallel computing, also known as parallel programming, is a process where large compute problems are broken down into smaller problems that can be solved simultaneously by multiple processors.

IBM offers a complete portfolio of integrated high-performance computing (HPC) solutions for hybrid cloud, which gives you the flexibility to manage compute-intensive workloads on premises or in the cloud. 


1. "Frontier keeps top spot, but Aurora officially becomes the second exascale machine," Top500, May 2024.

2. "Intersect360 Research Sizes Worldwide HPC-AI Market at $85.7B," HPCwire, April 2024.

Energy.gov: High Performance Computing

In recent decades, there has been substantial growth in the use of modeling, simulation, and artificial intelligence by scientists and engineers to devise solutions for complex problems and to push innovation forward. High-performance computing (HPC), the most powerful and largest-scale computing systems, enables researchers to study systems that would otherwise be impractical, or impossible, to investigate in the real world due to their complexity or the danger they pose.

For over half a century, America has led the world in HPC, thanks to sustained federal government investments in research and the development and regular deployment of new systems, as well as strong partnerships with U.S. computing vendors and researchers. Within the federal government, the Department of Energy (DOE) leads the effort of pushing the boundary of what is possible with the nation’s fastest and most capable supercomputers housed at the DOE National Laboratories.

The newly developed Frontier Supercomputer at Oak Ridge National Laboratory (ORNL) debuted in 2022 as the Nation’s first exascale computer – able to compute over one quintillion (1,000,000,000,000,000,000) floating point operations per second. The DOE owns four of the top ten fastest supercomputers in the world according to the June 2023 Top500 list of the world’s most powerful supercomputers: Frontier ranks at number one; Summit, also at ORNL, at five; Sierra at Lawrence Livermore National Laboratory at six; and Perlmutter at Lawrence Berkeley National Laboratory at eight. 

It is not just hardware that defines American leadership. U.S. leadership also depends critically on major ongoing efforts in key fields of research, including applied mathematics and computer science, advanced lithography, and nanoscale materials science—areas where DOE Office of Science research programs excel. The strong synergy between hardware development, on the one hand, and software and application development, on the other, has been a defining strength of the U.S. approach—and has transformed HPC over the years into an ever more capable tool of both science and industry.

To support researchers in the transition to the age of exascale computing, the Advanced Scientific Computing Research program within the DOE Office of Science partnered with the Advanced Simulation and Computing program within the National Nuclear Security Administration to devise and implement the Exascale Computing Project (ECP). Supporting over 1,000 researchers across the nation, ECP is focused on integration of application, hardware, and software research and development needed to effectively use an exascale system.

America has reaped enormous benefits from this leadership position. HPC has been a major factor in accelerating U.S. scientific and engineering progress. It has enabled the advancement of climate modeling leading us to better understand our impact on climate. It has enabled development of advanced manufacturing techniques and rapid prototyping. And it has ensured national security through stewardship of the nation’s nuclear stockpile in the absence of testing. Today, however, America’s leadership in HPC is under challenge as never before, as nations in Asia and Europe invest heavily in HPC research and deploy new systems.

Entering the exascale era represents a starting point for a new generation of computing. The DOE is committed to helping scientists and computing facilities prepare for this new generation and to pushing the boundaries of computing through exascale and beyond.

High-Performance Computing

High-performance computing (HPC) is the art and science of using groups of cutting-edge computer systems to perform complex simulations, computations, and data analysis that are out of reach for standard commercial computing systems.

What is HPC?

HPC computer systems are characterized by their high-speed processing power, high-performance networks, and large-memory capacity, generating the capability to perform massive amounts of parallel processing. A supercomputer is a type of HPC computer that is highly advanced and provides immense computational power and speed, making it a key component of high-performance computing systems.

In recent years, HPC has evolved from a tool focused on simulation-based scientific investigation to a dual role running simulation and machine learning (ML). This increase in scope for HPC systems has gained momentum because the combination of physics-based simulation and ML has compressed the time to scientific insight for fields such as climate modeling, drug discovery, protein folding, and computational fluid dynamics (CFD).

Figure: Basic system architecture of a supercomputer.

One key enabler driving this evolution of HPC and ML is the development of graphics processing unit (GPU) technology. GPUs are specialized computer chips designed to process large amounts of data in parallel, making them ideal for some HPC, and are currently the standard for ML/AI computations. The combination of high-performance GPUs with software optimizations has enabled HPC systems to perform complex simulations and computations much faster than traditional computing systems.
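
The data-parallel pattern that GPUs excel at can be sketched in a few lines of Python, assuming a CUDA-capable GPU and the CuPy library (a NumPy-compatible GPU array package) are available; the same array expression could also run on the CPU with NumPy alone.

    # Sketch of GPU offload with CuPy (assumes a CUDA GPU and cupy installed).
    import numpy as np
    import cupy as cp

    n = 4096
    a_cpu = np.random.random((n, n)).astype(np.float32)

    # Move the data to the GPU, run a dense matrix multiply there, copy it back.
    a_gpu = cp.asarray(a_cpu)
    c_gpu = a_gpu @ a_gpu
    c_cpu = cp.asnumpy(c_gpu)

    print(c_cpu.shape)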

Why Is HPC Important?

High-performance computing is important for several reasons:

  • Speed and Efficiency: HPC systems can perform complex calculations much faster than traditional computers, allowing researchers and engineers to tackle large-scale problems that would be infeasible with conventional computing resources.
  • Scientific Discovery: HPC is critical for many scientific disciplines, including climate modeling, molecular dynamics, and computational fluid dynamics. It allows researchers to simulate complex systems and processes, leading to new insights and discoveries.
  • Product Design and Optimization: HPC is widely used in industries such as aerospace, automotive, and energy to simulate and optimize the design of products, processes, and materials, improving their performance and reducing development time.
  • Data Analysis: HPC is also essential for analyzing large datasets, such as those generated by observational studies, simulations, or experiments. It enables researchers to identify patterns and correlations in the data that would be difficult to detect using traditional computing resources.
  • Healthcare: HPC is increasingly being used in healthcare to develop new treatments and therapies, including personalized medicine, drug discovery, and molecular modeling.

HPC has revolutionized the way research and engineering are conducted and has had a profound impact on many aspects of our lives, from improving the efficiency of industrial processes to disaster response and mitigation to furthering our understanding of the world around us.

How Does HPC Work?

High-performance computing works by combining the computational power of multiple computers to perform large-scale tasks that would be infeasible on a single machine. Here is how HPC works (a minimal code sketch follows the list):

  • Cluster Configuration: An HPC cluster is made up of multiple computers, or nodes, that are connected by a high-speed network. Each node is equipped with one or more processors, memory, and storage.
  • Task Parallelization: The computational work is divided into smaller, independent tasks that can be run simultaneously on different nodes in the cluster. This is known as task parallelization. 
  • Data Distribution: The data required for the computation is distributed among the nodes, so that each node has a portion of the data to work on. 
  • Computation: Each node performs its portion of the computation in parallel, with the results being shared and ultimately integrated until the work proceeds to completion. 
  • Monitoring and Control: The cluster includes software tools that monitor the performance of the nodes and control the distribution of tasks and data. This helps ensure that the computation runs efficiently and effectively. 
  • Output: The final output is the result of the combined computation performed by all the nodes in the cluster. The output is generally saved to a large, parallel file system and/or rendered graphically into images or other visual depictions to facilitate discovery, understanding, and communication. 
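
The sketch below maps these steps onto MPI collective operations using mpi4py (an assumed dependency, along with an MPI runtime): rank 0 distributes the data, every rank computes on its own portion, and a reduction combines the partial results.

    # Run with, e.g.: mpirun -n 4 python cluster_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Data distribution: rank 0 splits the input among all processes.
    full = np.arange(1_000_000, dtype=np.float64) if rank == 0 else None
    chunk = comm.scatter(np.array_split(full, size) if rank == 0 else None, root=0)

    # Computation: each process works on its portion in parallel.
    partial = np.sum(chunk * chunk)

    # Output: partial results are combined into one answer on rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"sum of squares = {total:.3e}")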

By harnessing the collective power of many computers, HPC enables large-scale simulations, data analysis, and other compute-intensive tasks to be completed in a fraction of the time it would take on a single machine.

What Is an HPC Cluster?

A high-performance computing cluster is a collection of tightly interconnected computers that work in parallel as a single system to perform large-scale computational tasks. HPC clusters are designed to provide high performance and scalability, enabling scientists, engineers, and researchers to solve complex problems that would be infeasible with a single computer. 

An HPC cluster typically consists of many individual computing nodes, each equipped with one or more processors, accelerators, memory, and storage. These nodes are connected by a high-performance network, allowing them to share information and collaborate on tasks. In addition, the cluster typically includes specialized software and tools for managing resources, such as scheduling jobs, distributing data, and monitoring performance. Application speedups are accomplished by partitioning data and distributing tasks to perform the work in parallel.

Figure: Training data speedup using traditional HPC. Source: adapted from graph data presented in "Convergence of Artificial Intelligence and High-Performance Computing on NSF-Supported Cyberinfrastructure," Journal of Big Data (springeropen.com).

HPC Use Cases

Climate Modeling

Climate models are used to simulate the behavior of the Earth's climate, including the atmosphere, oceans, and land surfaces. These simulations can be computationally intensive and require large amounts of data and parallel computing, making them ideal for GPU-accelerated HPC systems. By using GPUs and other parallel processing techniques, climate scientists can run more detailed and accurate simulations, which in turn lead to a better understanding of the Earth's climate and the impacts of human activities.  As this use case continues to progress, the predictive capabilities will grow and can be used to design effective mitigation and adaptation strategies.

Drug Discovery

The discovery and development of new drugs is a complex process that involves the simulation of millions of chemical compounds to identify those that have the potential to treat diseases. Traditional methods of drug discovery have been limited by insufficient computational power, but HPC and GPU technology allow scientists to run more detailed simulations and deploy more effective AI algorithms, resulting in the discovery of new drugs at a faster pace.

Protein Folding

Protein folding refers to the process by which proteins fold into three-dimensional structures, which are critical to their function. Understanding protein folding is critical to the development of treatments for diseases such as Alzheimer's and cancer. HPC and GPU technology are enabling scientists to run protein-folding simulations more efficiently, leading to a better understanding of the process and accelerating the development of new treatments.

Computational Fluid Dynamics

Computational fluid dynamics (CFD) simulations are used to model the behavior of fluids in real-world systems, such as the flow of air around an aircraft. HPC and GPU technology let engineers run more detailed and accurate CFD simulations, which help improve the designs for systems such as wind turbines, jet engines, and transportation vehicles of all types.

HPC Applications

Some of the most used high-performance computing applications in science and engineering include:

  • Molecular dynamics simulation
  • Computational fluid dynamics
  • Climate modeling
  • Computational chemistry
  • Structural mechanics and engineering
  • Electromagnetic simulation
  • Seismic imaging and analysis
  • Materials science and engineering
  • Astrophysical simulation
  • Machine learning and data analysis

There are many computer codes used for molecular dynamics (MD) simulations, but some of the most frequently used ones are:

  • Groningen Machine for Chemical Simulation (GROMACS)
  • Assisted Model Building With Energy Refinement (AMBER)
  • Chemistry at Harvard Molecular Mechanics (CHARMM)
  • Large-Scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)
  • Nanoscale Molecular Dynamics (NAMD)

There are several computer codes used for CFD simulations, but some of the most used ones are:

  • Ansys Fluent
  • COMSOL Multiphysics

There are many computer codes used for climate modeling, but some of the most used ones are:

  • Community Earth System Model (CESM)
  • Model for Interdisciplinary Research on Climate (MIROC)
  • Geophysical Fluid Dynamics Laboratory (GFDL) climate model
  • European Centre for Medium-Range Weather Forecasts (ECMWF) model
  • UK Met Office Unified Model (MetUM)
  • Max Planck Institute for Meteorology (MPI-M) Earth system model

There are several computer codes used for computational chemistry, but some of the most used ones are:

  • Quantum ESPRESSO
  • Molecular Orbital Package (MOPAC)
  • Amsterdam Density Functional (ADF)

There are many computer codes used for machine learning, but some of the most used ones are:

  • scikit-learn

These codes provide a wide range of ML algorithms, including supervised and unsupervised learning, deep learning, and reinforcement learning. They’re widely used for tasks such as image and speech recognition, natural language processing, and predictive analytics, and they’re essential tools for solving complex problems in areas such as computer vision, robotics, and finance.
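
For example, a minimal scikit-learn workflow (assuming the library is installed) trains and evaluates a classifier on one of its built-in toy datasets.

    # Minimal scikit-learn sketch: train and score a classifier on a toy dataset.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))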

How Can You Get Started With HPC?

Here are some ways to get started in high-performance computing:

  • Familiarize yourself with the basics of computer architecture, operating systems, and programming languages, particularly those commonly used for high-performance computing (such as C, C++, Fortran, and Python). 
  • Study parallel and distributed computing concepts, including parallel algorithms, interprocess communication, and synchronization. 
  • Get hands-on experience with high-performance computing tools and systems, such as clusters, GPUs, and the Message Passing Interface (MPI); a classic first exercise appears after this list. You can use online resources, such as the NVIDIA Deep Learning Institute, or try running simulations on public computing clusters.
  • Read research papers and books on the subject to learn about the latest advances and real-world applications of high-performance computing. 
  • Consider taking online courses or enrolling in a degree program in computer science, engineering, or a related field to get a more comprehensive understanding of the subject. 
  • Participate in coding challenges and hackathons focused on high-performance computing to improve your practical skills. 
  • Join online communities, such as the NVIDIA Developer Program , and attend workshops and conferences to network with professionals and stay up to date on the latest developments in the field.
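
The classic first hands-on MPI exercise mentioned above is a "hello world" that prints one line per process; a minimal version with mpi4py (assuming an MPI runtime is available) looks like this.

    # hello_mpi.py: run with, e.g., mpirun -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    print(f"Hello from rank {comm.Get_rank()} of {comm.Get_size()} "
          f"on {MPI.Get_processor_name()}")

Seeing one line of output per rank confirms that the MPI runtime is launching and coordinating multiple processes.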

Read Our Free HPC Ebook

In HPC for the Age of AI and Cloud Computing, discover best practices for system design, gain a deeper understanding of current and upcoming technology trends, learn from diverse use cases, hear from NVIDIA experts, and more.

Check out HPC Blogs

From news highlighting the latest breakthroughs in HPC to deep-dive technical walkthroughs showcasing how you can use NVIDIA solutions, there’s a blog to answer your HPC questions.


Hyperion Research

What We Do for High Performance Computing

We help IT professionals, business executives, and the investment community make fact-based decisions on technology purchases and business strategy. Specifically, we offer:

  • Traditional and Emerging HPC
  • HPC User Forums
  • Worldwide High Performance Technical Server QView
  • Artificial Intelligence-High Performance Data Analysis (HPDA-AI)
  • Cloud Computing Program
  • Quantum Computing Continuing Information Service
  • Consulting Services
  • Small and Emerging HPC End-User Multi-Client Study 2024
  • Worldwide HPC Server, Verticals and Countries Forecast Database

For more than 35 years, the industry analysts at Hyperion Research have been at the forefront of helping private and public organizations and government agencies make intelligent, fact-based decisions related to business impact and technology direction in the complex and competitive landscape of advanced computing and emerging technologies.

HPC Provides Economic ROI

Economic leadership increasingly results from a nation's, an industry's, or an enterprise's application of supercomputers in innovative and productive ways. The U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research, and the National Nuclear Security Administration fund the dissemination of information about ROI (return on investment) and ROR (return on research) from projects enabled by HPC.

HPC User Forum

HPC User Forum was established in 1999 to promote the health of the global HPC industry and address issues of common concern to users. Each year, two full-membership meetings are held in the United States and two in international locations. The User Forum is directed by a volunteer Steering Committee of leading users from government, industry/commerce and academia, and is operated by Hyperion Research for the benefit of HPC users.

HPC Innovation Awards

The HPC Innovation Awards are given twice a year to organizations that have made outstanding achievements using high performance computing. The three award categories showcase ROI and success stories showing HPC’s impact on increasing economic value, advancing scientific innovation and engineering progress, and on improving the quality of life worldwide.



Research in high performance computing


High-performance computing, including scientific computing, high-end computing, and supercomputing, involves the study of hardware and software systems, algorithms, languages, and architectures to advance computational problem solving at the largest scales. HPC research aims to increase the performance, energy efficiency, and intelligence of today's largest-scale systems and applications.

Illinois Tech's strong group of researchers conducts vibrant research that opens opportunities in several aspects of HPC, including memory and storage systems, scalable software and data management, data-intensive computing, specialized operating systems, language support for HPC, cluster management, interconnection networks, and energy efficiency. Close collaboration with researchers in other scientific disciplines such as physics, chemistry, biology, astronomy, and engineering creates further opportunities.

Joint research faculty appointments between the Department of Computer Science and Argonne National Laboratory open opportunities at a world-class United States Department of Energy laboratory close to the Illinois Tech campus. These joint faculty members collaborate with Illinois Tech researchers and serve as co-advisers to Ph.D. candidates in all areas of HPC.

Research Topics

  • Parallel Computing
  • Scalable Software Systems
  • Scientific Computing
  • Runtime Systems
  • High-Performance Storage and I/O
  • Interconnect Networks and Communications
  • Resource Management and Scheduling
  • Specialized Operating Systems
  • Data-Intensive Systems
  • Power and Energy Efficiency
  • Fault Tolerance
  • Modeling and Simulation

Affiliated Labs

DataSys Lab

Affiliated Faculty

Xian-He Sun

Distinguished Professor of Computer Science

Ron Hochsprung Endowed Chair of Computer Science

Research focus: Parallel and Distributed Processing

Ioan Raicu

Associate Professor of Computer Science

Associate Program Director, Master of Data Science

Research focus: Distributed Systems

Antonios Kougkas

Research Assistant Professor of Computer Science

Research focus: Parallel I/O

Kyle Hale

Assistant Professor of Computer Science

Research focus: High-Performance Computing

Stefan Muller

Gladwin Development Chair Assistant Professor of Computer Science

Research focus: Parallel Computing

Gerald Balekaki

Teaching Assistant Professor

Research focus: High Performance and Parallel Computing

High Performance Computing

High Performance Computing (HPC) integrates high-end hardware with highly tuned software to solve complex scientific problems. HPC simplifies and accelerates substantial computational components in a number of essential fields, such as genome sequencing and molecular dynamics in computational biology and cyber-physical domains. HPC also applies data-driven and AI techniques to diagnose and solve performance challenges in HPC systems themselves. HPC research at CISE focuses on improving performance, reducing cost, and making systems more energy efficient by applying intelligent and interdisciplinary methods. Research areas cross-cut electronic design automation (EDA), computer architecture, computer systems, and applied machine learning.

Innovative Energy Efficiency: Fisheye Cameras in Smart Spaces

Imagine a bustling corporate office building where energy consumption needs to be balanced with maintaining a comfortable environment for employees. In such settings, traditional methods of regulating air handling systems can lead to inefficiencies and waste energy in unoccupied areas. This is where the research of Boston University Professors and CISE affiliates Thomas Little, Janusz […]

The Future of Driving: Control Barrier Functions and the Internet of Vehicles

The National Highway Traffic Safety Administration reports that 94% of serious car crashes are due to human error. Christos Cassandras, Boston University Distinguished Professor of Electrical & Computer Engineering, Head of the Division of Systems Engineering, and a co-founder of the Center for Information & Systems Engineering (CISE), has made monumental contributions to the […]

AI for Cloud Ops Project Featured in “Red Hat Research Quarterly”

Ayse Coskun (ECE) was featured on the cover of the "Red Hat Research Quarterly" (RHRQ) May 2022 issue. Coskun discusses the need for operations-focused research on real-world systems and how artificial intelligence (AI) can push analytics to the speed of software deployment. Coskun is one of the Principal Investigators on the project "AI for Cloud Ops," which […]

Balancing electricity demands and costs of high-performance computing

Some computational problems are so complex that they require many times more calculations than a typical laptop can handle. High performance computing (HPC) combines a number of computer servers to carry out these large-scale computations that often operate on massive data sets. But data centers for HPC require immense amounts of power and limiting power […]

Professor Coskun and Team Will Collaborate with Sandia Labs on Applying AI to HPC

Professors Ayse Coskun, Manuel Egele, and Brian Kulis in ECE have received a $500K grant from Sandia National Labs for their project "AI-based Scalable Analytics for Improving Performance, Resilience, and Security of HPC Systems". HPC refers to High Performance Computing, the practice of collecting computing power so that a large system delivers high […]

Automated Analytics for Improving Efficiency, Safety, and Security of HPC Systems

Performance variations are becoming more prominent with new generations of large-scale High Performance Computing (HPC) systems. Understanding these variations and developing resilience to anomalous performance behavior are critical challenges for reaching extreme-scale computing. To help address these emerging performance variation challenges, there is increasing interest in designing data analytics methods to make sense out of the […]

AI-based Scalable Analytics for Improving Performance, Resilience, and Security of HPC Systems

Next-generation large-scale High Performance Computing (HPC) systems face important cost and scalability challenges due to anomalous system and application behavior resulting in wasted compute cycles and the ever-growing difficulty of system management. There is an increasing interest in the HPC community in using AI-based frameworks to tackle analytics and management problems in HPC so […]

Ayse Coskun Recognized in Computing Research

Prof. Coskun earns two awards and two major grants in computing research. Energy-efficient computing expert Professor Ayse Coskun was most recently recognized with grants from the National Science Foundation (NSF) and Sandia National Laboratories, a best paper award, and an early career award. NSF awarded the interdisciplinary team led by Prof. Coskun $234K (total award $700K, shared between […]

Paschalidis Hosts Symposium on Control and Network Systems

By Liz Sheeley. The 2nd Symposium on the COntrol of NEtwork Systems (SCONES) will be held on Monday, October 16 and Tuesday, October 17, 2017, at the Boston University Photonics Center. SCONES is being hosted by Professor Ioannis Paschalidis (ECE, BME, SE), the Editor-in-Chief of the IEEE Transactions on Control of Network Systems (TCNS), a publication sponsored by the IEEE Control Systems […]

SHF: Small: Reclaiming Dark Silicon via 2.5D Integrated Systems with Silicon Photonic Networks

Emerging applications in the growing domains of cloud, internet-of-things, and high-performance computing require higher levels of parallelism and much larger data transfers compared to applications of the past. In tandem, power and thermal constraints limit the number of transistors that can be used simultaneously on a chip, and this limit has led to the "Dark […]

University of Pennsylvania, School of Arts and Sciences

Technology for Research: High Performance Computing

High Performance Computing (HPC)

Although desktop computers continue to get more powerful every year, many researchers in SAS find that they need more computing power than a desktop can provide. High Performance Computing, or HPC, encompasses a wide variety of specialized computing processes that require lots of computing power in the form of many cores, a large amount of system memory, or even many such computers connected together in a cluster.

These researchers might need a computing system that can crunch very large amounts of data, handle the complexity of millions of iterations in a simulation program, or execute jobs that require massively parallel calculations.

In the School of Arts and Sciences, researchers have several options for HPC, depending on their needs and resources.

  • GPC: The new General Purpose Cluster (GPC) provides high-performance computing resources to researchers who need at least some access to HPC but don't necessarily need to invest in a privately managed cluster.
  • Social Sciences Computing (SSC): There are some shared servers for HPC in the Social Sciences departments; interested users can request an account on those servers, including Tesla and Hawk.

HPC at Penn

  • PMACS at PSOM: The Penn Medicine Academic Computing Support (PMACS) department has invested in a large, on-premises HPC environment where computing power can be rented by the core-hour by anyone at Penn, not only PSOM researchers.

HPC Grants from the NSF

  • XSEDE: The Extreme Science and Engineering Discovery Environment (XSEDE) is supported by the National Science Foundation. XSEDE makes supercomputers, massive data sets, and some expertise available to researchers who apply for and are awarded a grant. Interested researchers can contact our Campus Champion for help and advice on the grant submission process.

For Further Assistance

For assistance with any of the above options, or if you do not see something that you are looking for, please contact your LSP, who can provide additional information and expertise to help you find the right resource.

What’s New in HPC Research: Reinventing HPC, ParTypes, Crossbar AI Accelerator & More

By Mariana Iriarte

November 4, 2022

In this regular feature,  HPCwire  highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.

Reinventing high performance computing: challenges and opportunities

In this paper by a team of researchers from the University of Utah, University of Tennessee and Oak Ridge National Laboratory, the researchers dive into the challenges and opportunities associated with the state of high-performance computing. The researchers argue “that current approaches to designing and constructing leading-edge high-performance computing systems must change in deep and fundamental ways, embracing end-to-end co-design; custom hardware configurations and packaging; large-scale prototyping, as was common thirty years ago; and collaborative partnerships with the dominant computing ecosystem companies, smartphone and cloud computing vendors.” To prove their point, the authors provide a history of computing, discuss the economic and technological shifts in cloud computing, and the semiconductor concerns that are enabling the use of multichip modules. They also provide a summary of the technological, economic, and future directions of scientific computing.

The paper provided inspiration for the upcoming SC22 panel session of the same name, “Reinventing High-Performance Computing.”

Authors: Daniel Reed, Dennis Gannon, and Jack Dongarra 

A type discipline for message passing parallel programs

Researchers from the University of Lisbon and University of the Azores (Portugal), Copenhagen University and DCR Solutions A/S (Denmark), and Imperial College London (UK) developed a type discipline for parallel programs called ParTypes. In this research article, published in the ACM Transactions on Programming Languages and Systems journal, the team of researchers focused "on a model of parallel programming featuring a fixed number of processes, each with its local memory, running its own program and communicating exclusively by point-to-point synchronous message exchanges or by synchronizing via collective operations, such as broadcast or reduce." The researchers argue that "Type-based approaches have clear advantages against competing solutions for the verification of the sort of functional properties that can be captured by types."

Authors: Vasco T. Vasconcelos, Francisco Martins, Hugo A. López, and Nobuko Yoshida

Measurement-based estimator scheme for continuous quantum error correction 

An international team of researchers from the Okinawa Institute of Science and Technology Graduate University (Japan), Trinity College (Ireland), and the University of Queensland (Australia) developed a measurement-based estimator continuous quantum error correction (MBE-CQEC) scheme. In this paper, published by the American Physical Society in the Physical Review Research journal, the researchers demonstrated that by creating a "measurement-based estimator (MBE) of the logical qubit to be protected, which is driven by the noisy continuous measurement currents of the stabilizers, it is possible to accurately track the errors occurring on the physical qubits in real time." According to the researchers, by using the MBE, the newly developed scheme surpassed the performance of canonical discrete quantum error correction (DQEC) schemes, which "use projective von Neumann measurements on stabilizers to discretize the error syndromes into a finite set, and fast unitary gates are applied to recover the corrupted information." The scheme also "allows QEC to be conducted either immediately or in delayed time with instantaneous feedback."

Authors: Sangkha Borah, Bijita Sarma , Michael Kewming, Fernando Quijandría, Gerard J. Milburn, and Jason Twamley

GPU-based data-parallel rendering of large, unstructured, and non-convexly partitioned data

A team of international researchers leveraged the Texas Advanced Computing Center's Frontera supercomputer to "interactively render both Fun3D Small Mars Lander (14 GB / 798.4 million finite elements) and Huge Mars Lander (111.57 GB / 6.4 billion finite elements) data sets at 14 and 10 frames per second using 72 and 80 GPUs." Motivated by the Fun3D Mars Lander simulation data, the researchers from Bilkent University (Turkey), NVIDIA Corp. (California, USA), Bonn-Rhein-Sieg University of Applied Sciences (Germany), INESC TEC and University of Minho (Portugal), and the University of Utah (Utah, USA) introduced a "GPU-based, scalable, memory-efficient direct volume visualization framework suitable for in situ and post hoc usage." In this paper, the researchers described the approach's ability to reduce "memory usage of the unstructured volume elements by leveraging an exclusive or-based index reduction scheme and provides fast ray-marching-based traversal without requiring large external data structures built over the elements themselves." In addition, they also provide details on the team's development of the "GPU-optimized deep compositing scheme that allows correct order compositing of intermediate color values accumulated across different ranks that works even for non-convex clusters."

Authors: Alper Sahistan, Serkan Demirci, Ingo Wald, Stefan Zellmann, João Barbosa, Nathan Morrical, Uğur Güdükbay

Not all GPUs are created equal: characterizing variability in large-scale, accelerator-rich systems

University of Wisconsin-Madison researchers conducted a study with the goal "to understand GPU variability in large scale, accelerator-rich computing clusters." In this paper, the researchers seek to "characterize the extent of variation due to GPU power management in modern HPC and supercomputing systems." Leveraging Oak Ridge's Summit, Sandia's Vortex, TACC's Frontera and Longhorn, and Livermore's Corona, the researchers "collect over 18,800 hours of data across more than 90% of the GPUs in these clusters." The results show an "8% (max 22%) average performance variation even though the GPU architecture and vendor SKU are identical within each cluster, with outliers up to 1.5× slower than the median GPU."

Authors: Prasoon Sinha, Akhil Guliani, Rutwik Jain, Brandon Tran, Matthew D. Sinclair, and Shivaram Venkataraman 

Tools for quantum computing based on decision diagrams

In this paper published in ACM Transactions on Quantum Computing, Austrian researchers from the Johannes Kepler University Linz and Software Competence Center Hagenberg provide an introduction to tools designed for the development of quantum computing for users and developers alike. To start, the researchers "review the concepts of how decision diagrams can be employed, e.g., for the simulation and verification of quantum circuits." Then they present a "visualization tool for quantum decision diagrams, which allows users to explore the behavior of decision diagrams in the design tasks mentioned above." Lastly, the researchers dive into "decision diagram-based tools for simulation and verification of quantum circuits using the methods discussed above as part of the open-source Munich Quantum Toolkit." The tools and additional information are publicly available on GitHub at https://github.com/cda-tum/ddsim.

Authors: Robert Wille, Stefan Hillmich, and Lukas Burgholzer

Scalable coherent optical crossbar architecture using PCM for AI acceleration

University of Washington computer engineers developed an “optical AI accelerator based on a crossbar architecture.” According to the researchers, the chip’s design addressed the “lack of scalability, large footprints and high power consumption, and incomplete system-level architectures to become integrated within existing datacenter architecture for real-world applications.” In this paper, the University of Washington researchers also provided “system-level modeling and analysis of our chip’s performance for the Resnet50V1.5, considering all critical parameters, including memory size, array size, photonic losses, and energy consumption of peripheral electronics.” The results showed that “a 128×128 proposed architecture can achieve inference per second (IPS) similar to Nvidia A100 GPU at 15.4× lower power and 7.24× lower area.”

Authors: Dan Sturm and Sajjad Moazeni 

Do you know about research that should be included in next month’s list? If so, send us an email at [email protected] . We look forward to hearing from you.

Off The Wire: Industry Headlines

September 19, 2024

  • NCSA Partners in $20M SkAI Initiative to Advance AI-Driven Astrophysics Research
  • Pasqal and Université de Sherbrooke Partner to Advance Quantum Computing Research and Education
  • LiquidStack Raises $20M to Expand Commercial and R&D Efforts in Liquid Cooling
  • Supermicro’s New Multi-Node Liquid Cooled Architecture with Maximum Performance Density Purpose-Built for HPC at Scale
  • HSBC and Quantinuum Test Quantum-Resistant Security for Digital Gold Assets
  • Quantum Machines Integrates Qruise’s ML Software for Enhanced Quantum Control
  • NSF and Simons Foundation Launch New AI Institutes to Help Astronomers Understand the Cosmos
  • Alice & Bob and Thales Partner on Quantum Algorithms for Aerospace Equipment Design

September 18, 2024

  • SiFive Unveils XM Series for Accelerating High-Performance AI Workloads
  • OLCF Gathers Users for 20th Annual Meeting
  • QuEra Computing Strengthens Leadership Team with Appointment of Ed Durkin as CFO
  • QunaSys Launches QURI Chemistry in IBM’s Qiskit Functions Catalog for Chemical Innovation
  • David A. Padua to Receive ACM-IEEE CS Ken Kennedy Award
  • Quantum Brilliance, ParityQC Awarded Contract to Develop Mobile Quantum Computer by 2027

September 17, 2024

  • Qedma’s Error Mitigation Software Now Available within IBM’s Qiskit Functions
  • Jefferson Lab Collaboration Maps 3D Structure of Hadrons Using Supercomputing
  • Linux Foundation Unveils Valkey 8.0 with Enhanced Performance and Observability
  • ZutaCore and Munters Launch Integrated Cooling Solution to Optimize AI Data Centers
  • QMill Raises €4M in Seed Funding to Advance Practical Quantum Computing Solutions
  • Intel and AWS Expand Strategic Collaboration, Helping Advance U.S.-Based Chip Manufacturing


EU Spending €28 Million on AI Upgrade to Leonardo Supercomputer

The seventh fastest supercomputer in the world, Leonardo, is getting a major upgrade to take on AI workloads. The EuroHPC JU is spending €28 million to upgrade Leonardo to include new GPUs, CPUs and "high-bandwidth mem Read more…

Google’s DataGemma Tackles AI Hallucination

The rapid evolution of large language models (LLMs) has fueled significant advancement in AI, enabling these systems to analyze text, generate summaries, suggest ideas, and even draft code. However, despite these impress Read more…

Quantum and AI: Navigating the Resource Challenge

Rapid advancements in quantum computing are bringing a new era of technological possibilities. However, as quantum technology progresses, there are growing concerns about the availability of resources—a challenge remin Read more…

Intel’s Falcon Shores Future Looks Bleak as It Concedes AI Training to GPU Rivals

On Monday, Intel sent a letter to employees detailing its comeback plan after an abysmal second-quarter earnings report with critics calli Read more…

AI Helps Researchers Discover Catalyst for Green Hydrogen Production

September 16, 2024

Researchers from the University of Toronto have used AI to generate a “recipe” for an exciting new catalyst needed to produce green hydrogen fuel. As the effects of climate change begin to become more apparent in our Read more…

The Three Laws of Robotics and the Future

September 14, 2024

Isaac Asimov's Three Laws of Robotics have captivated imaginations for decades, providing a blueprint for ethical AI long before it became a reality. First introduced in his 1942 short story "Runaround" from the "I, R Read more…

GenAI: It’s Not the GPUs, It’s the Storage

September 12, 2024

A recent news release from data storage company WEKA and S&P Global Market Intelligence unveiled the findings of their second annual Global Trends in AI rep Read more…

Argonne’s HPC/AI User Forum Wrap Up

September 11, 2024

As fans of this publication will already know, AI is everywhere. We hear about it in the news, at work, and in our daily lives. It’s such a revolutionary tech Read more…

Quantum Software Specialist Q-CTRL Inks Deals with IBM, Rigetti, Oxford, and Diraq

September 10, 2024

Q-CTRL, the Australia-based start-up focusing on quantum infrastructure software, today announced that its performance-management software, Fire Opal, will be n Read more…

AWS’s High-performance Computing Unit Has a New Boss

Amazon Web Services (AWS) has a new leader to run its high-performance computing GTM operations. Thierry Pellegrino, who is well-known in the HPC community, has Read more…

Everyone Except Nvidia Forms Ultra Accelerator Link (UALink) Consortium

May 30, 2024

Consider the GPU. An island of SIMD greatness that makes light work of matrix math. Originally designed to rapidly paint dots on a computer monitor, it was then Read more…

AMD Clears Up Messy GPU Roadmap, Upgrades Chips Annually

June 3, 2024

In the world of AI, there's a desperate search for an alternative to Nvidia's GPUs, and AMD is stepping up to the plate. AMD detailed its updated GPU roadmap, w Read more…

Nvidia Shipped 3.76 Million Data-center GPUs in 2023, According to Study

June 10, 2024

Nvidia had an explosive 2023 in data-center GPU shipments, which totaled roughly 3.76 million units, according to a study conducted by semiconductor analyst fir Read more…

Nvidia Economics: Make $5-$7 for Every $1 Spent on GPUs

June 30, 2024

Nvidia is saying that companies could make $5 to $7 for every $1 invested in GPUs over a four-year period. Customers are investing billions in new Nvidia hardwa Read more…

Comparing NVIDIA A100 and NVIDIA L40S: Which GPU is Ideal for AI and Graphics-Intensive Workloads?

October 30, 2023

With long lead times for the NVIDIA H100 and A100 GPUs, many organizations are looking at the new NVIDIA L40S GPU, which is optimized for AI and g Read more…

Researchers Benchmark Nvidia’s GH200 Supercomputing Chips

September 4, 2024

Nvidia is putting its GH200 chips in European supercomputers, and researchers are getting their hands on those systems and releasing research papers with perfor Read more…

IonQ Plots Path to Commercial (Quantum) Advantage

July 2, 2024

IonQ, the trapped ion quantum computing specialist, delivered a progress report last week firming up 2024/25 product goals and reviewing its technology roadmap. Read more…

Google Announces Sixth-generation AI Chip, a TPU Called Trillium

May 17, 2024

On Tuesday May 14th, Google announced its sixth-generation TPU (tensor processing unit) called Trillium.  The chip, essentially a TPU v6, is the company's l Read more…

Intel’s Next-gen Falcon Shores Coming Out in Late 2025 

April 30, 2024

It's a long wait for customers hanging on for Intel's next-generation GPU, Falcon Shores, which will be released in late 2025.  "Then we have a rich, a very Read more…

Atos Outlines Plans to Get Acquired, and a Path Forward

May 21, 2024

Atos – via its subsidiary Eviden – is the second major supercomputer maker outside of HPE, while others have largely dropped out. The lack of integrators and Atos' financial turmoil have the HPC market worried. If Atos goes under, HPE will be the only major option for building large-scale systems. Read more…

xAI Colossus: The Elon Project

September 5, 2024

Elon Musk's xAI cluster, named Colossus (possibly after the 1970 movie about a massive computer that does not end well), has been brought online. Musk recently Read more…

Department of Justice Begins Antitrust Probe into Nvidia

August 9, 2024

After months of skyrocketing stock prices and unhinged optimism, Nvidia has run into a few snags – a  design flaw in one of its new chips and an antitrust pr Read more…

MLPerf Training 4.0 – Nvidia Still King; Power and LLM Fine Tuning Added

June 12, 2024

There are really two stories packaged in the most recent MLPerf  Training 4.0 results, released today. The first, of course, is the results. Nvidia (currently Read more…

Spelunking the HPC and AI GPU Software Stacks

June 21, 2024

As AI continues to reach into every domain of life, the question remains as to what kind of software these tools will run on. The choice in software stacks – Read more…

Nvidia H100: Are 550,000 GPUs Enough for This Year?

August 17, 2023

The GPU Squeeze continues to place a premium on Nvidia H100 GPUs. In a recent Financial Times article, Nvidia reports that it expects to ship 550,000 of its lat Read more…

Researchers Say Memory Bandwidth and NVLink Speeds in Hopper Not So Simple

July 15, 2024

Researchers measured the real-world bandwidth of Nvidia's Grace Hopper superchip, with the chip-to-chip interconnect results falling well short of theoretical c Read more…

Institute for Data Engineering and Science

High Performance Computing

High performance computing is necessary for supporting all aspects of data-driven research. HPC-related research includes computer architecture, systems software and middleware, networks, parallel and high-performance algorithms, programming paradigms, and run-time systems for data science.

The Center for High Performance Computing

Rich Vuduc, Director

Georgia Tech is now an established leader in computational techniques and algorithms for high performance computing and massive data. The center aims to advance the state of the art in massive data and high performance computing technology, and exploit HPC to solve high-impact real-world problems. The inherent complexity of these problems necessitates both advances in high performance computing and breakthroughs in our ability to extract knowledge from and understand massive complex data. The center's focus is primarily on algorithms and applications. Recent big data research has been in the areas of graph analytics, big data analytics for high throughput DNA sequencing, converting electronic health records into clinical phenotypes, and determining composite characteristics of metal alloys. Both IDEaS and the HPC Center will be co-located in the upcoming Coda building in Tech Square.

Center for Research into Novel Computing Hierarchies (CRNCH)

Tom Conte, Director

Tom Conte has spearheaded the development of the Center for Research into Novel Computing Hierarchies (CRNCH) to address fundamental challenges in the design of computer architectures. In 2015, the IEEE “Rebooting Computing Initiative” posed a grand challenge to create a computer able to adapt to data-driven problems and ultimately emulate computation with the efficiency of the human brain. This challenge is largely motivated by the end of Moore’s Law, the historical doubling of computer performance roughly every 18 months, which has now been curtailed by physical limitations. Massive data sets and the challenges arising from data research make leadership in novel architectures critical to Georgia Tech’s computational research.


Published: 13 December 2021

Fighting COVID-19 with HPC

Nature Computational Science volume 1, pages 769–770 (2021)

  • Computational science
  • Epidemiology
  • Molecular dynamics
  • Virtual screening

A special prize for outstanding research in high-performance computing (HPC) applied to COVID-19-related challenges sheds light on the computational science community’s efforts to fight the ongoing global crisis.

Established in 1987, the ACM Gordon Bell Prize is a prestigious award given each year to recognize outstanding achievement in high-performance computing, rewarding the most innovative research that has applied HPC technology to various applications in science, engineering, and large-scale data analytics. Looking back on the history of the prize gives insights into how scientific computing capabilities have changed and substantially improved over time, as well as into the broad range of applications that can successfully make use of these powerful capabilities, including fluid dynamics simulations, molecular dynamics (MD) simulations, climate analytics, and classical simulations of quantum circuits, to name a few examples. It is certainly remarkable to see how the community comes up with creative solutions that allow for the more effective use of powerful computing facilities to run calculations that would otherwise be hard or even impossible to perform using the more modest resources of a typical desktop computer or workstation.

In response to the ongoing pandemic, a separate prize was awarded for the first time last year, and awarded again this year, for outstanding research that uses HPC in innovative ways to deepen our understanding of the nature, spread and/or treatment of COVID-19. This prize highlights the efforts of the computational science community, with the help of HPC, in addressing one of the most difficult crises of recent years. This year’s finalists for the special prize, presented at the SC21 conference, showcased the breadth of COVID-19-related challenges being targeted by the community, as well as the range of computational solutions being explored by researchers, which we would like to highlight here.

Some of the finalists focused on SARS-CoV-2 antiviral drug design. Determining potential drug candidates can be very costly: given the vast size of the chemical space, searching for inhibitors that more strongly bind to the target (for instance, the SARS-CoV-2 Mpro and PLpro proteases) entails running an exhaustive and expensive search. Jens Glaser and colleagues from Oak Ridge National Laboratory (ORNL) used a natural language processing approach in order to accelerate the screening process of potential drug candidates 1 . The team generated an unprecedented dataset of approximately 9.6 billion molecules using the SMILES (Simplified Molecular Input Line Entry System) text representation, and pre-trained a large deep learning language model on this input dataset: the model learned a representation for chemical structure in a completely unsupervised manner. This pre-training stage is computationally expensive, and the researchers used the ORNL Summit supercomputer — which is currently the fastest supercomputer in the United States, and the second fastest in the world — to accomplish this task. Then, a smaller dataset of known binding affinities between molecules (potential inhibitors) and targets was used to fine-tune the model for binding affinity prediction: the pre-trained model can be used for candidate generation, and the fine-tuned one can be used for choosing the candidates with greater binding affinity. Both models can run on computers with modest resources, thus making the drug screening stage more broadly accessible to the research community.
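
The two-stage approach (unsupervised pre-training on raw SMILES strings, followed by supervised fine-tuning on a much smaller set of measured binding affinities) can be illustrated with a deliberately tiny sketch. The model, data, and hyperparameters below are hypothetical placeholders, not the ORNL team’s code, which used a large transformer trained on billions of molecules on Summit; the sketch only shows the shape of the workflow and requires PyTorch.

```python
# Hypothetical sketch (not the ORNL model): character-level language-model pre-training
# on SMILES strings, then fine-tuning a regression head for binding affinity.
import torch
import torch.nn as nn

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN(CC)CC"]   # toy unlabeled SMILES corpus
labeled = [("CCO", -4.2), ("CC(=O)O", -5.1)]           # toy (SMILES, affinity) pairs

vocab = sorted({ch for s in smiles for ch in s} | {"^", "$"})  # "^"/"$" mark start/end
stoi = {ch: i for i, ch in enumerate(vocab)}

def encode(s):
    return torch.tensor([stoi[ch] for ch in "^" + s + "$"])

class SmilesModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.next_char = nn.Linear(dim, vocab_size)  # head used during pre-training
        self.affinity = nn.Linear(dim, 1)            # head used during fine-tuning

    def forward(self, x):                            # returns per-token hidden states
        h, _ = self.rnn(self.emb(x))
        return h

model = SmilesModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: unsupervised pre-training -- predict the next character of each SMILES string.
for _ in range(200):
    for s in smiles:
        x = encode(s).unsqueeze(0)
        logits = model.next_char(model(x[:, :-1])).squeeze(0)
        loss = nn.functional.cross_entropy(logits, x[0, 1:])
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning -- regress binding affinity from the final hidden state.
for _ in range(200):
    for s, y in labeled:
        h = model(encode(s).unsqueeze(0))
        pred = model.affinity(h[:, -1]).squeeze()
        loss = (pred - torch.tensor(y)) ** 2
        opt.zero_grad(); loss.backward(); opt.step()

print("predicted affinity for CCO:", model.affinity(model(encode("CCO").unsqueeze(0))[:, -1]).item())
```

The key point mirrors the article: the expensive step is the unsupervised pre-training, while the fine-tuned regression head is comparatively cheap to train and run.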

The ORNL team was not the only one that focused on drug design: Hai-Bin Luo and colleagues also targeted the large-scale screening process, but by using statistical mechanics-based methods instead of language models 2 . More specifically, this team focused on the free energy perturbation-absolute binding free energy prediction (FEP-ABFE) method, which samples microscopic states using MD or Monte Carlo simulations to predict macroscopic properties (such as properties related to binding affinity) of the target system. While FEP-ABFE can achieve good accuracy, it has an extremely high demand for computational resources, which hampers its use for large-scale drug screening. To address this issue, the researchers developed, among other techniques, a customized job management system to run this method in a scalable manner on the new generation of the Tianhe system, currently the seventh fastest supercomputer in the world. They virtually screened more than 3.6 million compounds from commercially available databases using docking methods, and then ran the FEP-ABFE calculations on about 12,000 compounds to obtain FEP-based binding free energies.
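
For readers unfamiliar with free energy perturbation, the approach rests on the standard Zwanzig identity, a textbook relation rather than a formula taken from this paper: the free-energy difference between two states A and B can be estimated by sampling configurations of state A (with MD or Monte Carlo) and reweighting them by the energy difference,

\Delta F_{A \to B} = -k_B T \, \ln \left\langle \exp\!\left( -\frac{U_B - U_A}{k_B T} \right) \right\rangle_A .

Absolute binding free energies are then assembled from a thermodynamic cycle of many such small perturbation steps, each of which must be sampled well by MD, which is what makes the method so computationally demanding at screening scale.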

Other teams focused on better understanding different stages of the life cycle of SARS-CoV-2 with modeling and simulations. For instance, Arvind Ramanathan and colleagues explored the replication mechanism of SARS-CoV-2 in the host cell, which can provide insights into drug design 3 . Cryo-EM techniques have helped to elucidate the structural organization of the viral-RNA replication mechanism, but the overall resolution of the data is often poor, hindering the complete understanding of this mechanism. This team of researchers developed an iterative approach to improve the resolution within cryo-EM datasets by using MD simulations and finite element analysis. One of the challenges of the approach was the coupling of different resolutions, which the team addressed by leveraging machine learning algorithms. In order to help balance the workload, the researchers used a single coordinated workflow across multiple geographically dispersed supercomputing facilities: Perlmutter, which is currently the fifth fastest supercomputer in the world and is located in Berkeley, California, and ThetaGPU, an extension of Theta, currently the seventieth fastest supercomputer in the world and located at Argonne National Laboratory in Illinois.

Makoto Tsubokura and colleagues, the winners of this year’s special prize, turned their attention to how COVID-19 is transmitted via droplets and aerosol particles 4 . To better understand and evaluate the risk of infection caused by droplets and aerosols, this team of researchers focused on simulating how droplets reach other individuals after being emitted from an infected person and transported through the air. These end-to-end simulations must take into account complex phenomena and geometries, including the surrounding environment, the physics of droplets and aerosols, any air flow induced by other elements (such as air-conditioning systems), the number of people nearby, and so forth. The researchers implemented different computational fluid dynamics techniques to be able to scale the simulations to the Fugaku system, which is currently the fastest supercomputer in the world. Such simulations generated digital twins representing different transmission scenarios, and the results were widely communicated by the media and used to implement public policies in Japan.

Rommie Amaro and colleagues also focused their work on the airborne transmission of SARS-CoV-2: they developed a multiscale framework to study aerosolized viruses 5 . Studying these complex systems requires taking into account different resolutions (from nanometers to approximately one micron in size) and long timescales (spanning microseconds to seconds): the multi-resolution requirement makes all-atom MD simulations very challenging and computationally expensive. Among the many technical contributions, the researchers ran and scaled the MD simulations on the Summit supercomputer, allowing them to develop an impressive one-billion-atom simulation of the aerosolized version of the SARS-CoV-2 Delta virion, the first ever simulation of a respiratory aerosol. Such a simulation allows one to explore the composition, structure, and dynamics of respiratory aerosols, and can serve as a basis to develop new therapeutic solutions for COVID-19 (for instance, to find potential binding sites).

Last but not least, another finalist team focused on a different challenge: performing epidemic simulations. Madhav Marathe and colleagues developed a framework to generate real-time scenario projections, assessing the likelihood of epidemiological outcomes for possible future scenarios 6 . This can be used, for instance, to better allocate vaccine supplies, to evaluate the role of vaccine hesitancy, and to understand the impact of waning immunity, among other important studies for public health use. As part of their framework, the researchers built a digital twin of a time-varying social contact network of the United States using various national-scale datasets. This digital twin can then be brought to life by contextualizing it with current real-world conditions, using, again, different datasets of varying scales. After initializing the digital twin, a parallel agent-based socio-epidemic simulator, also developed by the researchers, can then be used to generate and analyze different scenarios. Because the simulations are very computationally intensive, a meta-scheduler for HPC clusters was explored in order to allow for the use of multiple clusters. In their analyses, the team used two supercomputers: Bridges-2, located at the Pittsburgh Supercomputing Center, and Rivanna, located at the University of Virginia. It is worth noting that the team has been doing scenario projections from the start of the pandemic for various state and federal agencies in the United States.
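
At its core, an agent-based epidemic simulation of the kind described above repeatedly updates the disease state of every individual based on contacts with neighbors in a social network. The sketch below is a highly simplified stand-in, not the authors' simulator: it uses a random contact network and made-up rates, and omits vaccination, waning immunity, calibration to real data, and the parallelization that makes the real system an HPC workload.

```python
# Toy agent-based Susceptible -> Infected -> Recovered simulation on a random
# contact network (illustrative only; population size and rates are made up).
import random

random.seed(1)
N, CONTACTS, P_INFECT, P_RECOVER, DAYS = 2000, 8, 0.05, 0.1, 120

# Crude random contact network, standing in for a data-driven digital twin.
contacts = [random.sample(range(N), CONTACTS) for _ in range(N)]

state = ["S"] * N
for seed in random.sample(range(N), 10):      # initial infections
    state[seed] = "I"

for day in range(DAYS):
    new_state = state[:]
    for person in range(N):
        if state[person] == "I":
            if random.random() < P_RECOVER:
                new_state[person] = "R"
            for neighbor in contacts[person]:  # transmission along contact edges
                if state[neighbor] == "S" and random.random() < P_INFECT:
                    new_state[neighbor] = "I"
    state = new_state
    if day % 20 == 0:
        print(day, {s: state.count(s) for s in "SIR"})
```

Scaling this update loop to a realistic, time-varying digital twin of the United States, with hundreds of millions of agents, is what pushes the computation onto systems such as Bridges-2 and Rivanna.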

Overall, all of these research works presented not only technical contributions, which can certainly be used in other HPC applications, but also important frameworks and studies that can be used to improve our understanding of the ongoing pandemic and to better implement policies to decrease the spread of the virus. As new HPC technologies and new computing architectures are developed, such as supercomputers with exascale capabilities, more remarkable advances are expected from the computational science community.

Blanchard, A. E. et al. Language models for the prediction of SARS-CoV-2 inhibitors. In Int. Conf. High Performance Computing, Networking, Storage, and Analysis (ACM, 2021); https://sc21.supercomputing.org/proceedings/tech_paper/tech_paper_pages/gbv102.html

Li, Z. et al. FEP-based large-scale virtual screening for effective drug discovery against COVID-19. In Int. Conf. High Performance Computing, Networking, Storage, and Analysis (ACM, 2021); https://sc21.supercomputing.org/proceedings/tech_paper/tech_paper_pages/gbv105.html

Trifan, A. et al. Preprint at bioRxiv https://doi.org/10.1101/2021.10.09.463779 (2021).

Ando, K. et al. Preprint at https://arxiv.org/abs/2110.09769v3 (2021).

Dommer, A. et al. Preprint at bioRxiv https://doi.org/10.1101/2021.11.12.468428 (2021).

Bhattacharya, P. et al. Data-driven scalable pipeline using national agent-based models for real-time pandemic response and decision support. In Int. Conf. High Performance Computing, Networking, Storage, and Analysis (ACM, 2021); https://sc21.supercomputing.org/proceedings/tech_paper/tech_paper_pages/gbv103.html

Fighting COVID-19 with HPC. Nat Comput Sci 1 , 769–770 (2021). https://doi.org/10.1038/s43588-021-00180-2

Introduction to High-Performance Computing

  • First Online: 14 September 2023

  • Marco Verdicchio
  • Carlos Teijeiro Barjas

Part of the book series: Methods in Molecular Biology (MIMB, volume 2716)

Since the first general-purpose computing machines appeared in the middle of the twentieth century, the popularity of computer science has grown steadily. The first computers represented a significant leap forward in automating calculations, allowing several theoretical methods to be taken from paper into practice. The continuous need for increased computing capacity made computers evolve and become more and more powerful. Nowadays, high-performance computing (HPC) is a crucial component of scientific and technological advancement. This book chapter introduces the field of HPC, covering key concepts and essential terminology needed to understand this complex and rapidly evolving area. The chapter begins with an overview of what HPC is and how it differs from conventional computing. It then explores the various components and configurations of supercomputers, including shared memory, distributed memory, and hybrid systems, as well as the different programming models used in HPC, including message passing, shared memory, and data parallelism. Finally, the chapter discusses significant challenges and future directions in supercomputing. Overall, this chapter provides a comprehensive introduction to the world of HPC and is an essential resource for anyone interested in this fascinating field.
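
To make the message-passing model concrete, here is a minimal sketch in Python rather than material from the chapter itself: each MPI process (rank) sums its own slice of a range and the partial results are combined on rank 0. It assumes the mpi4py package and an MPI runtime are installed.

```python
# Minimal message-passing example (illustrative, not from the chapter).
# Run with, e.g.:  mpirun -n 4 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the communicator
size = comm.Get_size()   # total number of processes

# Data parallelism: each rank sums its own slice of 0..999999.
local = sum(range(rank, 1_000_000, size))

# Message passing: combine the partial sums on rank 0.
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum computed by {size} ranks:", total)
```

Shared-memory programming (for example with OpenMP in C or Fortran) instead relies on threads reading and writing the same address space, and hybrid applications combine the two: message passing between nodes and threads within a node.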

Author information

Marco Verdicchio and Carlos Teijeiro Barjas, SURF BV, Amsterdam, the Netherlands

Corresponding author: Marco Verdicchio

Editor information

Alexander Heifetz, In Silico Research and Development, Evotec UK Ltd, Abingdon, UK

Copyright information

© 2024 The Author(s), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature

About this protocol

Verdicchio, M., Teijeiro Barjas, C. (2024). Introduction to High-Performance Computing. In: Heifetz, A. (eds) High Performance Computing for Drug Discovery and Biomedicine. Methods in Molecular Biology, vol 2716. Humana, New York, NY. https://doi.org/10.1007/978-1-0716-3449-3_2

DOI : https://doi.org/10.1007/978-1-0716-3449-3_2

Published : 14 September 2023

Publisher Name : Humana, New York, NY

Print ISBN : 978-1-0716-3448-6

Online ISBN : 978-1-0716-3449-3

eBook Packages : Springer Protocols

Discoveries in weeks, not years: How AI and high-performance computing are speeding up scientific discovery

Computing has already accelerated scientific discovery. Now scientists say a combination of advanced AI with next-generation cloud computing is turbocharging the pace of discovery to speeds unimaginable just a few years ago.

Microsoft and the Pacific Northwest National Laboratory (PNNL) in Richland, Washington, are collaborating to demonstrate how this acceleration can benefit chemistry and materials science – two scientific fields pivotal to finding energy solutions that the world needs.

Scientists at PNNL are testing a new battery material that was found in a matter of weeks, not years, as part of the collaboration with Microsoft to use advanced AI and high-performance computing (HPC), a type of cloud-based computing that combines large numbers of computers to solve complex scientific and mathematical tasks.

As part of this effort, the Microsoft Quantum team used AI to identify around 500,000 stable materials in the space of a few days.

The new battery material came out of a collaboration using Microsoft’s Azure Quantum Elements to winnow 32 million potential inorganic materials to 18 promising candidates that could be used in battery development in just 80 hours. Most importantly, this work breaks ground for a new way of speeding up solutions for urgent sustainability, pharmaceutical and other challenges while giving a glimpse of the advances that will become possible with quantum computing.

“We think there’s an opportunity to do this across a number of scientific fields,” says Brian Abrahamson, the chief digital officer at PNNL. “Recent technology advancements have opened up the opportunity to accelerate scientific discovery.”

PNNL is a U.S. Department of Energy laboratory doing research in several areas, including chemistry and materials science, and its objectives include energy security and sustainability. That made it the ideal collaborator with Microsoft to leverage advanced AI models to discover new battery material candidates.

“The development of novel batteries is an incredibly important global challenge,” Abrahamson says. “It has been a labor-intensive process. Synthesizing and testing materials at a human scale is fundamentally limiting.”

Learning through trial and error

The traditional first step of materials synthesis is to read all the published studies of other materials and hypothesize how different approaches might work out. “But one of the main challenges is that people publish their success stories, not their failure stories,” says Vijay Murugesan, materials sciences group lead at PNNL. That means scientists rarely benefit from learning from each other’s failures.

The next traditional scientific step is testing the hypotheses, typically a long, iterative process. “If it’s a failure, we go back to the drawing board again,” Murugesan says. One of his previous projects at PNNL, a vanadium redox flow battery technology, required several years to solve a problem and design a new material.

The traditional method requires looking at how to improve on what has been done in the past. Another approach would be to take all the possibilities and, through elimination, find something new. Designing new materials requires a lot of calculations, and chemistry is likely to be among the first applications of quantum computing. Azure Quantum Elements offers a cloud computing system designed for chemistry and materials science research with an eye toward eventual quantum computing, and is already working on these kinds of models, tools and workflows. These models will be improved for future quantum computers, but they are already proving useful for advancing scientific discovery using traditional computers.

To evaluate its progress in the real world, the Microsoft Quantum team focused on something ubiquitous in our lives – materials for batteries.

Teaching materials science to AI

Microsoft first trained different AI systems to do sophisticated evaluations of all the workable elements and to suggest combinations. The algorithm proposed 32 million candidates – like finding a needle in a haystack. Next, the AI system found all the materials that were stable. Another AI tool filtered out candidate molecules based on their reactivity, and another based on their potential to conduct energy.

The idea isn’t to find every single possible needle in the hypothetical haystack, but to find most of the good ones. Microsoft’s AI technology whittled the 32 million candidates down to about 500,000 mostly new stable materials, then down to 800.

“At every step of the simulation where I had to run a quantum chemistry calculation, instead I’m calling the machine learning model. So I still get the insight and the detailed observations that come from running the simulation, but the simulation can be up to half a million times faster,” says Nathan Baker, Product Leader for Azure Quantum Elements.

AI may be fast, but it isn’t perfectly accurate. The next set of filters used HPC, which provides high accuracy but uses a lot of computing power. That makes it a good tool for a smaller set of candidate materials. The first HPC verification used density functional theory to calculate the energy of each material relative to all the other states it could be in. Then came molecular dynamics simulations that combined AI and HPC to analyze the movements of atoms and molecules inside each material.

This process culled the list to 150 candidates. Finally, Microsoft scientists used HPC to evaluate the practicality of each material – availability, cost and such – to trim the list to 23 – five of which were already known.

Thanks to this AI-HPC combination, discovering the most promising material candidates took just 80 hours.
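
The funnel structure described above, cheap learned filters first and expensive physics-based checks only on the survivors, can be sketched in a few lines. Everything below is a schematic placeholder rather than Microsoft's pipeline: the filter functions are hypothetical stand-ins for the AI surrogates and the DFT/MD verification stages.

```python
# Schematic screening funnel (illustrative only): fast, approximate filters run over
# the full candidate list; slower, high-accuracy checks run only on the survivors.
import random

random.seed(0)
candidates = [f"material-{i}" for i in range(100_000)]   # stand-in for the 32M candidates

def predicted_stable(m):     return random.random() < 0.30   # fast ML surrogate (placeholder)
def predicted_inert(m):      return random.random() < 0.20   # reactivity filter (placeholder)
def predicted_conductive(m): return random.random() < 0.10   # conductivity filter (placeholder)
def dft_energy_ok(m):        return random.random() < 0.50   # expensive DFT-style check (placeholder)
def md_dynamics_ok(m):       return random.random() < 0.40   # expensive MD-style check (placeholder)

stages = [predicted_stable, predicted_inert, predicted_conductive,  # cheap "AI" stages
          dft_energy_ok, md_dynamics_ok]                            # costly "HPC" stages

survivors = candidates
for stage in stages:
    survivors = [m for m in survivors if stage(m)]
    print(f"{stage.__name__}: {len(survivors)} candidates remain")
```

The design point is ordering: because each stage discards most candidates, the costly high-accuracy stages only ever see a tiny fraction of the original pool, which is how roughly 90 percent of the compute time could stay in the fast AI filters.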

The HPC portion accounted for 10 percent of the time spent computing – and that was on an already-targeted set of molecules. This intense computing is the bottleneck, even at universities and research institutions that have supercomputers, which are not tailored to a specific domain and are also shared, so researchers may have to wait their turn. Microsoft’s cloud-based AI tools relieve this situation.

Broad applications and accessibility

Microsoft scientists used AI to do the vast majority of the winnowing, accounting for about 90 percent of the computational time spent. PNNL materials scientists then vetted the short list down to half a dozen candidate materials. Because Microsoft’s AI tools are trained for chemistry, not just battery systems, they can be used for any kind of materials research, and the cloud is always accessible.

“We think the cloud is a tremendous resource in improving the accessibility to research communities,” Abrahamson says.

Today, Microsoft supports a chemistry-specific copilot and AI tools that together act like a magnet that pulls possible needles out of the haystack, trimming the number of candidates for further exploration so scientists know where to focus. “The vision we are working toward is generative materials, where I can ask for a list of new battery compounds with my desired attributes,” Baker says.

The hands-on stage is where the project stands now. The material has been successfully synthesized and turned into prototype batteries that are functional and will undergo multiple tests in the lab. Making the material at this point, before it’s commercialized, is artisanal. One of the first steps is to take solid precursors of the materials and to grind them by hand with a mortar and pestle, explains Shannon Lee, a PNNL materials scientist. She then uses a hydraulic press to compact the material into a dime-shaped pellet. It goes into a vacuum tube and is heated to 450 to 650 degrees Celsius (842 to 1202 degrees Fahrenheit), transferred to a box to keep it away from oxygen or water, and then ground into a powder for analysis.

For this material, the 10-or-more-hour process is “relatively quick,” Lee says. “Sometimes it takes a week or two weeks to make a single material.”

Then hundreds of working batteries must be tested, over thousands of different charging cycles and other conditions, and later different battery shapes and sizes to realize commercial use. Murugesan dreams of the development of a digital twin for chemistry or materials, “so you don’t need to go to a lab and put this material together and make a battery and test it. You can say, ‘this is my anode and this is my cathode and that’s the electrolyte and this is how much voltage I’m going to apply,’ and then it can predict how everything will work together. Even details like, after 10,000 cycles and five years of usage, the material performance will be like this.”

Microsoft is already working on digital tools to speed up the other parts of the scientific process.

The lengthy traditional process is illustrated by lithium-ion batteries. Lithium got attention as a battery component in the early 1900s, but rechargeable lithium-ion batteries didn’t hit the market until the 1990s.

Today, lithium-ion batteries increasingly run our world, from phones to medical devices to electric vehicles to satellites. Lithium demand is expected to rise five to ten times by 2030, according to the U.S. Department of Energy. Lithium is already relatively scarce, and thus expensive. Mining it is environmentally and geopolitically problematic. Traditional lithium-ion batteries also pose safety issues, with the potential to catch fire or explode.

Many researchers are looking for alternatives, both for lithium and for the materials used as electrolytes. Solid-state electrolytes show promise for their stability and safety.

Surprising results

The newly discovered material PNNL scientists are currently testing uses both lithium and sodium, as well as some other elements, thus reducing the lithium content considerably – possibly by as much as 70 percent. It is still early in the process – the exact chemistry is subject to optimization and might not work out when tested at larger scale, Abrahamson cautions. He points out that the story here is not about this particular battery material, but rather the speed at which a material was identified. The scientists say the exercise itself is immensely valuable, and it has revealed some surprises.

The AI-derived material is a solid-state electrolyte. Ions shuttle back and forth through the electrolyte, between the cathode and the anode, ideally with minimal resistance.

Test tubes contain samples of the new material, which looks like fine white salt.

It was thought that sodium ions and lithium ions couldn’t be used together in a solid-state electrolyte system because they are similarly charged but have different sizes. It was assumed that the structural framework of a solid-state electrolyte material couldn’t support the movement of two different ions. But after testing, Murugesan says, “we found that the sodium and lithium ions seem to help each other.”

The new material has a bonus, Baker says, because its molecular structure naturally has built-in channels that help both ions move through the electrolyte.

Work on the new material is in early stages but “irrespective of whether it’s a viable battery in the long run, the speed at which we found a workable battery chemistry is pretty compelling,” Abrahamson says.

Additional discoveries are still possible. Murugesan and his team have yet to make and test most of the other new material candidates that the Microsoft models suggested. The collaboration continues, with PNNL computational chemists learning to use the new tools, including a copilot trained on chemistry and other scientific publications.

“With Microsoft and PNNL, this is an enduring collaboration to accelerate scientific discovery, bringing the power of these computational paradigm shifts to bear, with the chemistry and material science that are a hallmark strength of the Pacific Northwest National Laboratory,” Abrahamson says.

“We’re sitting on the precipice of this maturation of the artificial intelligence models, the computational power needed to train and make them useful, and the ability to train them on specific scientific domains with specific intelligence,” he adds. “That, we believe, is going to usher in a new era of acceleration. That is exciting, because these problems matter to the world.”

Related links:

  • Read Unlocking a new era for scientific discovery with AI: How Microsoft’s AI screened over 32 million  candidates to find a better battery
  • Read Azure Quantum Elements aims to compress 250 years of chemistry into the next 25
  • Learn more about Azure Quantum Elements
  • Read:   PNNL-Microsoft Collaboration: Accelerating Scientific Discovery
  • Read the PNNL press release: Energy Storage, Materials Discovery Kick-Off Three-Year Collaboration with Microsoft

Top image: Dan Thien Nguyen, a PNNL materials scientist, assembles a coin cell with the synthesized solid electrolyte. With AI tools guiding researchers, synthesis and testing can be focused in the right direction toward better materials for particular applications. Photo by Dan DeLong for Microsoft.

SBU News

New HPC to Advance Research Capacities Across Several Fields

Stony Brook is the first academic institution in the country to set up this new HPC solution

Stony Brook University will soon deploy a new high-performance computing (HPC) system built using new technologies launched this year by Hewlett Packard Enterprise (HPE) and Intel. This HPC solution is designed to advance science and engineering research capacities across an array of multidisciplinary fields, including engineering, physics, the social sciences and bioscience.

The new solution, to be co-managed by the Institute for Advanced Computational Science (IACS) and the Division of Information Technology (DoIT), will continue to help transform Stony Brook’s current computing environment with faster performance.

For more than a decade, the IACS has ramped up supercomputing capacity available to faculty and students for their academic and research endeavors, including collaborative work with other institutions and industry. Most recently, IACS installed a supercomputer called Ookami in 2020.

Stony Brook is the first academic institution in the United States to set up this new HPC solution that uses the Intel Xeon CPU Max series on HPE ProLiant servers. The solution provides an enhancement and refresh to Stony Brook’s powerful Seawulf computational cluster.

Funding for the new Seawulf was provided by the National Science Foundation (NSF Major Research Instrumentation award 2215987), matching funds from Empire State Development’s Division of Science, Technology and Innovation (NYSTAR) program (contract C210148), plus crucial funding from SBU’s President, Provost, Vice President for Research, CIO, and the chair of the Department of Materials Science and Chemical Engineering. Additional funding was provided by the Deans of the College of Arts and Sciences (CAS), the College of Engineering and Applied Sciences (CEAS), the School of Marine and Atmospheric Sciences (SoMAS), and the IACS, without whose leadership, vision, and financial support this would not have been possible.

“I am excited by what this new generation of computer processors promises, as some of the work may be up to eight times faster with this new HPC solution compared to our current SeaWulf cluster,” said Robert J. Harrison, founding endowed director of the IACS. “Stony Brook’s leadership of the NSF Ookami project and our partnership with HPE were instrumental in positioning us to successfully become the first university to use this revolutionary technology.”

The new solution is built using HPE ProLiant DL360 Gen11 servers, which deliver trusted security by design and optimized performance to support a range of scientific workloads involving modeling, simulation, AI and analytics.

HPE has also designed the solution with 4th Gen Intel Xeon Scalable processors to boost computational performance; it features the Intel Xeon CPU Max Series, which offers an order of magnitude more memory bandwidth per socket than other x86 server systems. The increased bandwidth delivered by the Max Series CPUs and other architectural enhancements in 4th Gen Xeon offer computational scientists new technology to advance their research.

The HPE ProLiant DL360 Gen11 servers will also improve cost savings and reduce the data center footprint for the university with a closed-loop liquid cooling capability. The cooling solution, which requires fewer racks and thus maximizes space, does not require additional plumbing; instead, it removes heat from various system components and transfers it to coolant tubes to cool down the system.

Fostering inclusion and fueling HPC innovation through partnerships

The project will partner with Stony Brook’s Women in Science and Engineering program and the Center for Inclusive Education to provide campus-wide access to resources and training to better facilitate engagement in computational science by students from diverse backgrounds and disciplines.

In addition to leveraging technologies from HPE and Intel, the new HPC and AI solution was deployed by ComnetCo, HPE’s HPC-focused solution provider and award-winning public sector partner.

ComnetCo has a longstanding collaboration with Stony Brook, and for the past 25 years, has extended its expertise in delivering supercomputing solutions to scientific and technical computing communities within higher education and research institutions.

“The new HPC system will be a boon for the whole campus. It will greatly enhance multi-scale, multi-physics applications like astrophysics,” said Alan Calder, a professor in the Department of Physics and Astronomy and deputy director of the IACS. “Not only will models be able to include more physics and thus have unprecedented realism, but the system’s capacity will allow for performing suites of simulations for meaningful assessment of the uncertainty of the results.”

Dilip Gersappe, professor and chair of the Department of Materials Science and Chemical Engineering, points out that the HPC solution can help bring his research on soils to a new level.

“The new cluster enables us to simulate the thaw/freeze cycle in soils in the Arctic region, a region where climate change effects and temperature fluctuations are having profound effects,” he explained.

The new solution is expected to be in production this summer and in operation sometime during the first semester of the 2023-24 academic year.

High Performance Computing

High Performance Computing provides supercomputer access and supporting software for researchers who need powerful processing resources. This includes the Greene supercomputer, one of the fastest HPC resources in higher education.

High Performance Computing (NYU IT)

The COVID-19 High-Performance Computing Consortium

Lawrence Livermore National Laboratory, Livermore, CA 94550, USA

Nancy Campbell, IBM Research, Yorktown Heights, NY 10598, USA

Barbara Helland, Department of Energy, Washington, DC 20585, USA

Manish Parashar, University of Utah, Salt Lake City, UT 84112, USA

Michael Rosenfield

James Sexton

University of Illinois, Urbana, IL 61801, USA

In March of 2020, recognizing the potential of High-Performance Computing (HPC) to accelerate understanding and the pace of scientific discovery in the fight to stop COVID-19, the HPC community assembled the largest collection of worldwide HPC resources to enable COVID-19 researchers worldwide to advance their critical efforts. Amazingly, the COVID-19 HPC Consortium was formed within one week through the joint effort of the Office of Science and Technology Policy (OSTP), the U.S. Department of Energy (DOE), the National Science Foundation (NSF), and IBM. The Consortium created a unique public–private partnership between government, industry, and academic leaders to provide access to advanced HPC and cloud computing systems and data resources, along with critical associated technical expertise and support, at no cost to researchers in the fight against COVID-19. The Consortium created a single point of access for COVID researchers. This article is the Consortium's story—how the Consortium was created, its founding members, what it provides, how it works, and its accomplishments. We will reflect on the lessons learned from the creation and operation of the Consortium and describe how the features of the Consortium could be sustained as a National Strategic Computing Reserve (NSCR) to ensure the nation is prepared for future crises.

Creation of the Consortium

As the pandemic began to significantly accelerate in the United States, on March 11 and 12, 2020, IBM and the HPC community started to explore ways to organize efforts to help in the fight against COVID-19. IBM had years of experience with HPC and knew its capability to help solve hard problems. Its vision was to organize the HPC community to leverage its substantial computing capabilities and resources to accelerate progress and understanding in the fight against COVID-19 by connecting COVID-19 researchers with organizations that had significant HPC resources. At this point in the pandemic, the efforts in the DOE, NSF, and other organizations within the U.S. Government, as well as around the world, were independent and ad hoc in nature.

It was clear very early on that a broader and more coordinated effort was needed to leverage existing efforts and relationships to create a unique HPC collaboration.

Early in the week of March 15, 2020, leadership at the DOE Labs and at key academic institutions was supportive of the vision: very quickly create a public–private consortium between government, industry, and academic leaders to aggregate compute time and resources on their supercomputers and to make them freely available to aid in the battle against the virus. On March 17, the White House OSTP began to actively support the creation of the Consortium, along with DOE and NSF leadership. The NSF recommended leveraging their Extreme Science and Engineering Discovery Environment (XSEDE) Project 1 and its XSEDE Resource Allocations System (XRAS), which handles nearly 2000 allocation requests annually 2 , to serve as the access point for the proposals. Recognizing that time was critical, a team, now comprising IBM, DOE, OSTP, and NSF, was formed with the goal of creating the Consortium in less than a week! Remarkably, the Consortium met that goal without formal legal agreements. Essentially, all potential members agreed to a simple statement of intent that they would provide their computing facilities’ capabilities and expertise at no cost to COVID-19 researchers, that all parties in this effort would be participating at risk and without liability to each other, and without any intent to influence or otherwise restrict one another.

From the beginning, it was recognized that communication and expedient creation of a community around the Consortium would be key. Work began on the Consortium website a the following day. The Consortium Executive Committee was formed to lay the groundwork for the operations of the Consortium. By Sunday, March 22, the XSEDE Team instantiated a complete proposal submission and review process that was hosted under the XSEDE website b and provided direct access to the XRAS submission system, which was ready to accept proposal submissions the very next day.

It was fortunate that the Consortium assembled so swiftly, because OSTP announced that the President would introduce the concept of the Consortium at a news conference on March 22. Numerous news articles came out after the announcement that evening. The Consortium became a reality when the website c went live the next day, followed by additional press releases and news articles. The researchers were ready: the first proposal was submitted on March 24, and the first project was started on March 26, demonstrating our ability to connect researchers with resources in a matter of days, an exceptionally short time for processes of this kind. Subsequently, 50 proposals had been submitted by April 15 and 100 by May 9.

A more detailed description of the Consortium's creation can be found in the IEEE Computer Society Digital Library at https://doi.ieeecomputersociety.org/10.1109/MCSE.2022.3145608 . An extended version of this article can be found on the Consortium website. a

Consortium Members and Capabilities

The Consortium initially provided access to over 300 petaflops of supercomputing capacity provided by the founding members: IBM; Amazon Web Services; Google Cloud; Microsoft; MIT; RPI; DOE's Argonne, Lawrence Livermore, Los Alamos, Oak Ridge, and Sandia National Laboratories; NSF and its supported advanced computing resources, advanced cyberinfrastructure, services, and expertise; and NASA.

Within several months, the Consortium grew to 43 members (see Figure 1) from the United States and around the world (the complete list can be found at https://covid19-hpc-consortium.org/ ), providing access to over 600 petaflops of supercomputing capacity, over 165,000 compute nodes, more than 6.8 million processor cores, and over 50,000 GPUs, systems collectively worth billions of dollars. In addition, the Consortium collaborated with two other worldwide initiatives: the EU PRACE COVID-19 Initiative and a COVID-19 initiative at the National Computational Infrastructure Australia and the Pawsey Supercomputing Centre. d The Consortium also added nine affiliates (also listed and described at websites a, c), which provided expertise and supporting services to enable researchers to start up quickly and run more efficiently.

Figure 1. Consortium members and affiliates as of July 7, 2021.

Governance and Operations

Even though there were no formal agreements between the Consortium members, an agile governance model was developed, as shown in Figure 2. An Executive Board, composed of a subset of the founding members, oversees all aspects of the Consortium and is the final decision-making authority. Initially, the Executive Board met weekly; it now meets monthly. The Board reviews progress, reviews recommendations for new members and affiliates, and provides guidance to the Executive Committee on future directions and activities of the Consortium. The Science and Computing Executive Committee, which reports to the Executive Board (see also Figure 2), is responsible for the day-to-day operations of the Consortium: overseeing the review and computer matching process, tracking project progress, maintaining and updating the website, highlighting Consortium results (for example, with blogs and webinars), and determining and proposing next steps for Consortium activities.

Figure 2. Consortium organizational structure as of July 7, 2021.

The Scientific Review and Computing Matching Sub-Committees play a crucial role in the success of the Consortium. The Scientific Review team—comprising subject matter experts drawn from many organizations across the research community e —reviews proposals for merit based on the review criteria and guidance b provided to proposers and recommends appropriate proposals to the Computing Matching Sub-Committee. The Computing Matching Sub-Committee, comprising representatives of the Consortium members providing resources, matches the computing needs of recommended proposals with either the proposer's requested site or other appropriate resources. Once matched, the researcher goes through the standard onboarding/approval process at the host site to gain access to the system. Initially, we expected the onboarding/approval process to be time consuming (it was the only point at which actual agreements had to be signed), but the staff executing the onboarding processes at the various member compute providers worked diligently to prioritize these requests, so onboarding typically takes only a day or two. As a result, once approved, projects are up and running very rapidly.
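
For readers who want a concrete picture of the matching step, the sketch below models it as a greedy first-fit pass over reviewed proposals. It is purely illustrative: the names Proposal, Provider, and match_proposals are hypothetical and are not part of XRAS or any Consortium system, and the real process relies on the judgment of the sub-committee members rather than an automated rule.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Provider:
    name: str
    free_node_hours: int
    has_gpus: bool

@dataclass
class Proposal:
    pi: str
    node_hours: int
    needs_gpus: bool = False
    requested_site: Optional[str] = None

def match_proposals(proposals: List[Proposal], providers: List[Provider]) -> dict:
    """Greedy first-fit matching: try the proposer's requested site first,
    then fall back to any provider with enough capacity and hardware."""
    by_name = {p.name: p for p in providers}
    matches = {}
    for prop in proposals:
        preferred = [by_name[prop.requested_site]] if prop.requested_site in by_name else []
        for provider in preferred + providers:
            fits = provider.free_node_hours >= prop.node_hours
            hardware_ok = provider.has_gpus or not prop.needs_gpus
            if fits and hardware_ok:
                provider.free_node_hours -= prop.node_hours
                matches[prop.pi] = provider.name
                break
        else:
            matches[prop.pi] = None  # no match yet; revisit in a later round
    return matches

if __name__ == "__main__":
    providers = [Provider("SiteA", 500_000, True), Provider("SiteB", 200_000, False)]
    proposals = [Proposal("pi-1", 150_000, needs_gpus=True, requested_site="SiteB"),
                 Proposal("pi-2", 300_000)]
    print(match_proposals(proposals, providers))  # {'pi-1': 'SiteA', 'pi-2': 'SiteA'}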

The Membership Committee reviews requests for organizations and individuals to become members or affiliates to provide additional resources to the Consortium. These requests are in turn sent to OSTP for vetting, with the Executive Committee making final recommendations to the Executive Board for approval.

Project Highlights

The goal of the Consortium is to provide state-of-the-art HPC resources to scientists all over the world to accelerate and enable R&D that can contribute to the pandemic response. Over 115 projects have been supported, covering a broad spectrum of technical areas ranging from understanding the SARS-CoV-2 virus and its interaction with humans to optimizing medical supply chains and resource allocation, and have been organized into a taxonomy of areas consisting of basic science, therapeutic development, and patients.

Consortium projects have produced a broad range of scientific advances. The projects have collectively produced a growing number of publications, datasets, and other products (more than 70 as of the end of calendar year 2021), including two journal covers. f A more detailed description of the Consortium's project highlights and operational results can be found at https://covid19-hpc-consortium.org/projects and https://covid19-hpc-consortium.org/blog , respectively.

While Consortium projects have contributed significantly to scientific understanding of the virus and its potential therapeutics, direct and near-term impact on the course of the pandemic has been mixed. There are cases of significant impact, but, overall, the patient-related applications that have the most direct path to near-term impact have been less successful. It may be possible to attribute this to the lower level of experience in HPC that is typical of these groups, but patient data availability and use restrictions and the lack of connection to front-line medical and response efforts are also important factors. These are issues that will need to be addressed in planning for future pandemics or other crisis response programs.

Lessons Learned From the COVID-19 HPC Consortium

The COVID-19 pandemic has shown that the existence of an advanced computing infrastructure is not sufficient on its own to effectively support the national and international response to a crisis. There must also be mechanisms in place to rapidly make this infrastructure broadly accessible, which includes not only the computing systems themselves, but also the human expertise, software, and relevant data to rapidly enable a comprehensive and effective response.

The following are the key lessons learned.

  • The ability to leverage existing processes and tools (e.g., XSEDE) was critical and should be considered for future responses.
  • Engagement with the stakeholder community is an area that should be improved based on the COVID-19 experience. For example, early collaboration with the NIH, FEMA, CDC, and the medical provider community could have significantly increased impact in the patient care and epidemiology areas. Having prenegotiated agreements with these and similar stakeholders will be important going forward.
  • Substantial time and effort are required to make resources and services available to researchers so that they can do their work. A standing capability to support the proposal submission and review process, as well as to coordinate with service providers on the necessary access to resources and services, would have been helpful.
  • It would have been beneficial to have had use authorizations in place for the supercomputers and resources provided by U.S. Government organizations.
  • While the proposal review and award process ran sufficiently well, there was no integration of the resources being provided and the associated institutions into an accounting and account management system. Though XSEDE operates such a system, there was no time to integrate the resources into it. This integration would have greatly facilitated the matching and onboarding processes and would have provided usage data and insight into resource utilization.
  • Given the absence of formal operating and partnership agreements in the Consortium and the mix of public and private computing resources, the work supported was limited to open, publishable activities. This inability to support proprietary work likely reduced the effectiveness and impact of the Consortium, particularly in support for private-sector work on therapeutics and patient care. A lightweight framework for supporting proprietary work and associated intellectual property requirements would increase the utility of responses to similar future crises.

Next Step: The NSCR

Increasingly, the nation's advanced computing infrastructure—and access to this infrastructure, along with critical scientific and technical support in times of crisis—is important to the nation's safety and security. g , h Computing is playing an important role in addressing the COVID-19 pandemic and has, similarly, assisted in national emergencies of the recent past, from hurricanes, earthquakes, and oil spills, to pandemics, wildfires, and even rapid turnaround modeling when space missions have been in jeopardy. To improve the effectiveness and timeliness of these responses, we should draw on the experience and the lessons learned from the Consortium in developing an organized and sustainable approach for applying the nation's computing capability to future national needs.

We agree with the rationale behind the creation of an NSCR, as outlined in the recently published OSTP Blueprint, to protect national safety and security by establishing a new public–private partnership: a coalition of experts and resource providers (compute, software, data, and technical expertise) spanning government, academia, nonprofits/foundations, and industry, supported by appropriate coordination structures and mechanisms, that can be mobilized quickly and efficiently to provide critical computing capabilities and services in times of urgent need.

Figure 3 shows a transition from a pre-COVID ad hoc response to crises to the Consortium and then to an NSCR. i

Figure 3. Potential path from pre-COVID ad hoc crisis response to the NSCR.

Principal Functions of the NSCR

In much the same way as the Merchant Marine j maintains a set of “ready reserve” resources that can be put to use in wartime, the NSCR would maintain reserve computing capabilities for urgent national needs. Like the Merchant Marine, this effort would involve building and maintaining sufficient infrastructure and human capabilities, while also ensuring that these capabilities are organized, trained, and ready in the event of activation. The principal functions of the NSCR are proposed to be as follows:

  • recruit and sustain a group of advanced computing and data resource and service provider members in government, industry, and academia;
  • develop relevant agreements with members, including provisions for augmented capacity or cost reimbursement for deployable resources, for the urgent deployment of computing and supporting resources and services, and for provision of incentives for nonemergency participation;
  • develop a set of agreements to enable the Reserve to collaborate with domain agencies and industries in preparation for and execution of Reserve deployments;
  • execute a series of preparedness exercises on a regular basis to test and maintain the Reserve;
  • establish processes and procedures for activating and operating the national computing reserve in times of crisis;
  • execute procedures to review and prioritize projects and to allocate computing resources to approved projects;
  • track project progress and disseminate products and outputs to ensure effective use and impact;
  • participate in the broader national response as an active partner.

The COVID-19 HPC Consortium has been in operation for almost two years k and has enabled over 115 research projects investigating multiple aspects of COVID-19 and the SARS-CoV-2 coronavirus. To maximize impact going forward, the Consortium has transitioned to a focus on the following:

  1) proposals in specific targeted areas;
  2) gathering and socializing results from current projects;
  3) driving the establishment of an NSCR.

New project focus areas target impact within a six-month time period. The Consortium is particularly, though not exclusively, interested in projects focused on understanding and modeling patient response to the virus using large clinical datasets; learning and validating vaccine response models from multiple clinical trials; evaluating combination therapies using repurposed molecules; understanding and mitigating mutations; and epidemiological models driven by large multimodal datasets.
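
As a generic illustration of the last of these focus areas, many epidemiological models are compartmental models whose parameters are estimated from such datasets. The sketch below integrates a standard SEIR model with a simple forward-Euler step; it is not code from any Consortium project, and the parameter values are illustrative assumptions rather than fitted estimates.

def seir(beta=0.3, sigma=1 / 5.2, gamma=1 / 10, n=1_000_000, e0=10, days=180, dt=0.1):
    """Minimal SEIR compartmental model integrated with forward Euler.
    beta: transmission rate, sigma: incubation rate (1/latent period),
    gamma: recovery rate (1/infectious period). Returns one sample per day."""
    s, e, i, r = n - e0, float(e0), 0.0, 0.0
    trajectory = []
    for step in range(int(days / dt)):
        new_exposed = beta * s * i / n * dt      # S -> E
        new_infectious = sigma * e * dt          # E -> I
        new_recovered = gamma * i * dt           # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        if step % int(1 / dt) == 0:              # record once per simulated day
            trajectory.append((step * dt, s, e, i, r))
    return trajectory

if __name__ == "__main__":
    for day, s, e, i, r in seir()[::30]:         # print every 30 days
        print(f"day {day:5.1f}  S={s:10.0f}  E={e:9.0f}  I={i:9.0f}  R={r:10.0f}")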

We have drawn on our experience and lessons learned through the COVID-19 HPC Consortium, and on our observation of how the scientific community, federal agencies, and healthcare professionals came together in short order to allow computing to play an important role in addressing the COVID-19 pandemic. We have also proposed a possible path forward, the NSCR, for being better prepared to respond to future national emergencies that require urgent computing, ranging from hurricanes and earthquakes to pandemics and wildfires. Increasingly, the nation's computing infrastructure—and access to this infrastructure along with critical scientific and technical support in times of crisis—is important to the nation's safety and security, and its response to natural disasters, public health emergencies, and other crises.

Acknowledgments

The authors would like to thank the past and present members of the Consortium Executive Board for their guidance and leadership. In addition, the authors would like to thank Jake Taylor and Michael Kratsios, formerly of OSTP, Dario Gil of IBM, and Paul Dabbar, formerly of DOE, for their key roles in helping make the creation and operation of the Consortium possible. The authors also would like to thank Corey Stambaugh of OSTP for his leadership role on the Consortium membership committee. Furthermore, the authors thank all the member and affiliate organizations from academia, government, and industry that contributed countless hours of their time along with their compute resources. The service provided by researchers across many institutions as scientific reviewers was critical in selecting appropriate projects, and their time and effort are greatly appreciated. Finally, the authors thank the many researchers who did such outstanding work, leveraging the Consortium, in the fight against COVID-19.

Biographies

Jim Brase is currently a Deputy Associate Director for Computing with Lawrence Livermore National Laboratory (LLNL), Livermore, CA, USA. He leads LLNL research in the application of high-performance computing, large-scale data science, and simulation to a broad range of national security and science missions. He is a co-lead of the ATOM Consortium for computational acceleration of drug discovery and a member of the leadership team of the COVID-19 HPC Consortium. He is currently leading efforts on large-scale computing for life science, biosecurity, and nuclear security applications. In his previous position as LLNL's deputy program director for intelligence, he led efforts in intelligence and cybersecurity R&D. His research interests focus on the intersection of machine learning, simulation, and HPC. Contact him at [email protected].

Nancy Campbell is responsible for the coordinated execution of the IBM Research Director's government engagement agenda and the resulting strategic partnerships within and across industry, academia, and government, including the COVID-19 HPC Consortium and the International Science Reserve. Prior to this role, she was the program director for IBM's COVID-19 Technology Task Force, responsible for developing and delivering technology-based solutions to address the consequences of COVID-19 for IBM's employees, clients, and society at large. Previously, she led large multidisciplinary teams in closing IBM's two largest software divestitures, for an aggregate value in excess of $2.3 billion, and numerous strategic intellectual property partnerships, for an aggregate value in excess of $3 billion. Prior to joining IBM, she was the CEO of one of Selby Venture Partners' portfolio companies and facilitated the successful sale of that company to its largest channel partner. She attended the University of Southern California, Los Angeles, CA, USA, and serves as IBM's executive sponsor for the USC Master of Business for Veterans program. Contact her at [email protected].

Barbara Helland is currently an Associate Director of the Office of Science's Advanced Scientific Computing Research (ASCR) program. In addition to her associate director duties, she is leading the development of the Department's Exascale Computing Initiative to deliver a capable exascale system by 2021. She has also served as an executive director of the COVID-19 High-Performance Computing Consortium since its inception in March 2020. She was previously ASCR's facilities division director. She was also responsible for opening ASCR's facilities to researchers nationwide, including those in industry, through the expansion of the Department's Innovative and Novel Computational Impact on Theory and Experiment program. Prior to DOE, she developed and managed computational science educational programs at the Krell Institute, Ames, IA, USA. She also spent 25 years at Ames Laboratory working closely with nuclear physicists and physical chemists to develop real-time operating systems and software tools to automate experimental data collection and analysis, and in the deployment and management of lab-wide computational resources. Helland received the B.S. degree in computer science and the M.Ed. degree in organizational learning and human resource development from Iowa State University, Ames. In recognition of her work on the Exascale Computing Initiative and with the COVID-19 HPC Consortium, she was named to the 2021 Agile 50 list of the world's 50 most influential people navigating disruption. Contact her at [email protected].

Thuc Hoang is currently the Director of the Office of Advanced Simulation and Computing (ASC) and Institutional Research and Development Programs in the Office of Defense Programs within the DOE National Nuclear Security Administration (NNSA), Washington, DC, USA. The ASC program develops and deploys high-performance simulation capabilities and computational resources to support the NNSA annual stockpile assessment and certification process and other nuclear security missions. She manages ASC's research, development, acquisition, and operation of HPC systems, in addition to the NNSA Exascale Computing Initiative and the future computing technology portfolio. She has served on proposal review panels and advisory committees for the NSF, the Department of Defense, and the DOE Office of Science, as well as for other international HPC programs. Hoang received the B.S. degree from Virginia Tech, Blacksburg, VA, USA, and the M.S. degree from Johns Hopkins University, Baltimore, MD, USA, both in electrical engineering. Contact her at [email protected].

Manish Parashar is the Director of the Scientific Computing and Imaging (SCI) Institute, the Chair in Computational Science and Engineering, and a Professor with the School of Computing, University of Utah, Salt Lake City, UT, USA. He is currently on an IPA appointment at the National Science Foundation where he is serving as the Office Director of the NSF Office of Advanced Cyberinfrastructure. He is the Founding Chair of the IEEE Technical Consortium on High Performance Computing, the Editor-in-Chief of IEEE Transactions on Parallel and Distributed Systems , and serves on the editorial boards and organizing committees of several journals and international conferences and workshops. He is a Fellow of AAAS, ACM, and IEEE. For more information, please visit http://manishparashar.org . Contact him at [email protected].

Michael Rosenfield is currently a Vice President of strategic partnerships with the IBM Research Division, Yorktown Heights, NY, USA. Previously, he was a vice president of Data Centric Solutions, Indianapolis, IN, USA. His research interests include the development and operation of new collaborations, such as the COVID-19 HPC Consortium and the Hartree National Centre for Digital Innovation, as well as future computing architectures and enabling accelerated discovery. Prior work in Data Centric Solutions included current and future system and processor architecture and design, including CORAL and exascale systems; system software; workflow performance analysis; the convergence of big data, AI, analytics, modeling, and simulation; and the use of these advanced systems to solve real-world problems as part of the collaboration with the Science and Technology Facilities Council's Hartree Centre in the U.K. He has held several other executive-level positions in IBM Research, including Director of Smarter Energy, Director of VLSI Systems, and Director of the IBM Austin Research Lab. He started his career at IBM working on electron-beam lithography modeling and proximity correction techniques. Rosenfield received the B.S. degree in physics from the University of Vermont, Burlington, VT, USA, and the M.S. and Ph.D. degrees from the University of California, Berkeley, CA, USA. Contact him at [email protected].

James Sexton is currently an IBM Fellow with the IBM T. J. Watson Research Center, New York, NY, USA. Prior to joining IBM, he held appointments as a lecturer and then as a professor with Trinity College Dublin, Dublin, Ireland, and as a postdoctoral fellow with the IBM T. J. Watson Research Center, the Institute for Advanced Study at Princeton, and Fermi National Accelerator Laboratory. His research interests span high-performance computing, computational science, and applied mathematics and analytics. Sexton received the Ph.D. degree in theoretical physics from Columbia University, New York, NY, USA. Contact him at [email protected].

John Towns is currently an Executive Associate Director for engagement with the National Center for Supercomputing Applications and Deputy CIO for Research IT in the Office of the CIO, Champaign, IL, USA; he holds both appointments with the University of Illinois at Urbana-Champaign. He is also the PI and Project Director for the NSF-funded XSEDE project (the Extreme Science and Engineering Discovery Environment). He provides leadership and direction in the development, deployment, and operation of advanced computing resources and services in support of a broad range of research activities. In addition, he is the founding chair of the Steering Committee of PEARC (Practice and Experience in Advanced Research Computing, New York, NY, USA). Towns received the B.S. degree from the University of Missouri-Rolla, Rolla, MO, USA, and the M.S. degree from the University of Illinois at Urbana-Champaign, Champaign, IL, USA, both in physics and astronomy. Contact him at [email protected].



Title: HPC with Enhanced User Separation

Abstract: HPC systems used for research run a wide variety of software and workflows. This software is often written or modified by users to meet the needs of their research projects, and rarely is built with security in mind. In this paper we explore several of the key techniques that MIT Lincoln Laboratory Supercomputing Center has deployed on its systems to manage the security implications of these workflows by providing enforced separation for processes, filesystem access, network traffic, and accelerators to make every user feel like they are running on a personal HPC.
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Cite as: arXiv:2409.10770 [cs.DC]

