SIAM PP20 – The Many Faces of Simulation for HPC Minisymposium

Minisymposium: The Many Faces of Simulation for HPC

Saturday, February 15, 2020

Organizers:
Rafael Ferreira da Silva
University of Southern California, U.S.
Frédéric Suter
CNRS, France


Abstract – In the field of HPC research and development, simulation has mainly been used to evaluate and compare the performance of application implementations and of the algorithms therein. While this use remains critical, and for good reason, many other compelling use cases have emerged, often made possible by recent advances in the simulation methodologies at the core of available simulation frameworks. Examples of new areas in which simulation has become a compelling proposition include debugging and verification, application/simulation co-design, and HPC education. In this multi-part minisymposium, we bring together researchers who have contributed to traditional uses of simulating HPC systems and applications and who have explored emerging ones. The objective is for them to share their experiences, present recent results, identify areas of convergence, and discuss future directions.


Session 1

https://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=67786

10:40-11:00 The Many Faces of Simulation for HPC
Frédéric Suter, CNRS, France;
Rafael Ferreira da Silva, University of Southern California, U.S.

11:05-11:25 Teaching Parallel and Distributed Computing Concepts in Simulation
Henri Casanova, University of Hawaii, U.S.

11:30-11:50 Fast and Faithful Performance Prediction of MPI Applications: the HPL Case Study
Tom Cornebize, Université Grenoble Alpes, France;
Arnaud Legrand, CNRS, France;
Franz Christian Heinrich, Inria, France

11:55-12:15 Power-Aware Scheduling with Slurm: Simulation and Practice
Tapasya Patki, Lawrence Livermore National Laboratory, U.S.


Session 2

https://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=67787

1:50-2:10 Faithful Performance Prediction of a Dynamic Task-Based Runtime System, an Opportunity for Task Graph Scheduling
Samuel Thibault, LaBRI, France;
Luka Stanisic, Inria Bordeaux Sud-Ouest, France;
Arnaud Legrand, CNRS, France;
Brice Videau, Inria Grenoble Rhône-Alpes, France;
Jean-François Méhaut, Université Joseph Fourier, France

2:15-2:35 New Horizons for Debugging Long-running Parallel Programs: DMTCP and SimGrid
Gene Cooperman and Rohan Garg, Northeastern University, U.S.

2:40-3:00 Application-Simulation Co-Design for Performance and Correctness Evaluation
Luigi Genovese, CEA, France;
Augustin Degomme, CEA Grenoble, France

3:05-3:25 To Be Defined
TBD


Bridging Concepts and Practice in eScience via Simulation-driven Engineering

The CyberInfrastructure (CI) has been the object of intensive research and development in the last decade, resulting in a rich set of abstractions and interoperable software implementations that are used in production today for supporting ongoing and breakthrough scientific discoveries. A key challenge is the development of tools and application execution frameworks that are robust in current and emerging CI configurations, and that can anticipate the needs of upcoming CI applications. This paper presents WRENCH, a framework that enables simulation-driven engineering for evaluating and developing CI application execution frameworks. WRENCH provides a set of high-level simulation abstractions that serve as building blocks for developing custom simulators. These abstractions rely on the scalable and accurate simulation models provided by the SimGrid simulation framework. Consequently, WRENCH makes it possible to build, with minimal software development effort, simulators that can accurately and scalably simulate a wide spectrum of large and complex CI scenarios. These simulators can then be used to evaluate and/or compare alternate platform, system, and algorithm designs, so as to drive the development of CI solutions for current and emerging applications.

Simulation-driven engineering life cycle

Reference to the paper:

  • [PDF] [DOI] R. Ferreira da Silva, H. Casanova, R. Tanaka, and F. Suter, “Bridging Concepts and Practice in eScience via Simulation-driven Engineering,” in Workshop on Bridging from Concepts to Data and Computation for eScience (BC2DC’19), 15th International Conference on eScience (eScience), 2019, p. 609–614.
    [Bibtex]
    @inproceedings{ferreiradasilva2019escience,
    title = {Bridging Concepts and Practice in eScience via Simulation-driven Engineering},
    author = {Ferreira da Silva, Rafael and Casanova, Henri and Tanaka, Ryan and Suter, Frederic},
    booktitle = {Workshop on Bridging from Concepts to Data and Computation for eScience (BC2DC'19), 15th International Conference on eScience (eScience)},
    year = {2019},
    pages = {609--614},
    doi = {10.1109/eScience.2019.00084}
    }



Accurately Simulating Energy Consumption of I/O-intensive Scientific Workflows

While distributed computing infrastructures can provide infrastructure-level techniques for managing energy consumption, application-level energy consumption models have also been developed to support energy-efficient scheduling and resource provisioning algorithms. In this work, we analyze the accuracy of a widely used application-level model that has been developed and used in the context of scientific workflow executions. To this end, we profile two production scientific workflows on a distributed platform instrumented with power meters. We then conduct an analysis of power and energy consumption measurements. This analysis shows that power consumption is not linearly related to CPU utilization and that I/O operations significantly impact power, and thus energy, consumption. We then propose a power consumption model that accounts for I/O operations, including the impact of waiting for these operations to complete, and for concurrent task executions on multi-socket, multi-core compute nodes. We implement our proposed model as part of a simulator that allows us to draw direct comparisons between real-world and modeled power and energy consumption. We find that our model has high accuracy when compared to real-world executions. Furthermore, our model improves accuracy by about two orders of magnitude when compared to the traditional models used in the energy-efficient workflow scheduling literature.
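For context, the traditional application-level model referred to above is linear in CPU utilization; a common textbook formulation (notation ours, not necessarily the paper's exact equations) is shown below, together with the kind of I/O-aware decomposition the abstract motivates:

    % Traditional linear power model from the energy-efficient
    % scheduling literature (u is the CPU utilization, in [0, 1]):
    P(u) = P_{\mathrm{idle}} + u \, (P_{\mathrm{max}} - P_{\mathrm{idle}})

    % Schematic I/O-aware decomposition, in which time spent performing
    % or waiting on I/O contributes its own power term:
    P(t) = P_{\mathrm{CPU}}(t) + P_{\mathrm{I/O}}(t)

The first form is the traditional model referenced above; the paper's finding is that a linear CPU term alone mispredicts power for I/O-intensive tasks.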

Per-task power (top) and total energy (bottom) consumption for the Epigenomics map task and the SoyKB haplotype caller and indel realign tasks, as measured and as estimated with traditional methods (estimation) and with our proposed model (wrench-*).

Reference to the paper:

  • [PDF] [DOI] R. Ferreira da Silva, A. Orgerie, H. Casanova, R. Tanaka, E. Deelman, and F. Suter, “Accurately Simulating Energy Consumption of I/O-intensive Scientific Workflows,” in Computational Science – ICCS 2019, 2019, p. 138–152.
    [Bibtex]
    @inproceedings{ferreiradasilva-iccs-2019,
    author = {Ferreira da Silva, Rafael and Orgerie, Anne-C\'{e}cile and Casanova, Henri and Tanaka, Ryan and Deelman, Ewa and Suter, Fr\'{e}d\'{e}ric},
    title = {Accurately Simulating Energy Consumption of I/O-intensive Scientific Workflows},
    booktitle = {Computational Science -- ICCS 2019},
    year = {2019},
    pages = {138--152},
    publisher = {Springer International Publishing},
    doi = {10.1007/978-3-030-22734-0_11}
    }



Running Accurate, Scalable, and Reproducible Simulations of Distributed Systems with WRENCH

Scientific workflows are used routinely in numerous scientific domains, and Workflow Management Systems (WMSs) have been developed to orchestrate and optimize workflow executions on distributed platforms. WMSs are complex software systems that interact with complex software infrastructures. Most WMS research and development activities rely on empirical experiments conducted with full-fledged software stacks on actual hardware platforms. Such experiments, however, are limited to the hardware and software infrastructures at hand and can be labor- and/or time-intensive. As a result, relying solely on real-world experiments impedes WMS research and development. An alternative is to conduct experiments in simulation.

In this work, we present WRENCH, a WMS simulation framework, whose objectives are (i) accurate and scalable simulations; and (ii) easy simulation software development. WRENCH achieves its first objective by building on the SimGrid framework. While SimGrid is recognized for the accuracy and scalability of its simulation models, it only provides low-level simulation abstractions, and thus large software development efforts are required when implementing simulators of complex systems. WRENCH thus achieves its second objective by providing high-level and directly reusable simulation abstractions on top of SimGrid. After describing and giving rationales for WRENCH’s software architecture and APIs, we present a case study in which we apply WRENCH to simulate the Pegasus production WMS. We report on ease of implementation, simulation accuracy, and simulation scalability, so as to determine to what extent WRENCH achieves these two objectives. We also draw both qualitative and quantitative comparisons with a previously proposed workflow simulator.

Empirical cumulative distribution function of task submit times (left) and task completion times (right) for sample real-world (“pegasus”) and simulated (“wrench” and “workflowsim”) executions of Montage-2.0 on AWS-m5.xlarge.


Reference to the paper:

  • [PDF] [DOI] H. Casanova, S. Pandey, J. Oeth, R. Tanaka, F. Suter, and R. Ferreira da Silva, “WRENCH: A Framework for Simulating Workflow Management Systems,” in 13th Workshop on Workflows in Support of Large-Scale Science (WORKS’18), 2018, p. 74–85.
    [Bibtex]
    @inproceedings{casanova-works-2018,
    title = {{WRENCH: A Framework for Simulating Workflow Management Systems}},
    author = {Casanova, Henri and Pandey, Suraj and Oeth, James and Tanaka, Ryan and Suter, Frederic and Ferreira da Silva, Rafael},
    booktitle = {13th Workshop on Workflows in Support of Large-Scale Science (WORKS'18)},
    year = {2018},
    pages = {74--85},
    doi = {10.1109/WORKS.2018.00013}
    }


WRENCH: Workflow Management System Simulation Workbench


Abstract – WRENCH enables novel avenues for scientific workflow use, research, development, and education. WRENCH capitalizes on recent and critical advances in the state of the art of distributed platform/application simulation. WRENCH builds on top of the open-source SimGrid simulation framework. SimGrid enables the simulation of large-scale distributed applications in a way that is accurate (via validated simulation models), scalable (low ratio of simulation time to simulated time, ability to run large simulations on a single computer with low compute, memory, and energy footprints), and expressive (ability to simulate arbitrary platform, application, and execution scenarios). WRENCH provides directly usable high-level simulation abstractions using SimGrid as a foundation. More information at https://wrench-project.org.

In a nutshell, WRENCH makes it possible to (see the sketch after this list):

  • Prototype implementations of Workflow Management System (WMS) components and underlying algorithms;
  • Quickly, scalably, and accurately simulate arbitrary workflow and platform scenarios for a simulated WMS implementation; and
  • Run extensive experimental campaigns to conclusively compare workflow executions, platform architectures, and WMS algorithms and designs.
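To make these bullets concrete, here is a minimal sketch of what a WRENCH simulator's entry point can look like, modeled on the WRENCH 1.x C++ API. The class names (wrench::Simulation, wrench::Workflow, wrench::WMS, and the service classes) follow the WRENCH documentation, but the exact constructor and method signatures vary across releases, so the arguments below are assumptions to be checked against your installed version:

    #include <wrench.h>

    // A user-defined WMS: WRENCH simulators subclass wrench::WMS and
    // implement main(), which holds the decision-making logic under study.
    // NOTE: the wrench::WMS base-class arguments below are assumptions
    // based on the WRENCH 1.x documentation; check your release.
    class MyWMS : public wrench::WMS {
    public:
        MyWMS(const std::set<std::shared_ptr<wrench::ComputeService>> &compute,
              const std::set<std::shared_ptr<wrench::StorageService>> &storage,
              const std::string &hostname)
            : wrench::WMS(nullptr, nullptr, compute, storage, {}, nullptr,
                          hostname, "my_wms") {}
    private:
        int main() override {
            // Create a job manager, submit ready tasks to the compute
            // service, wait for completion events... (elided)
            return 0;
        }
    };

    int main(int argc, char **argv) {
        wrench::Simulation simulation;
        simulation.init(&argc, argv);            // WRENCH/SimGrid CLI options
        simulation.instantiatePlatform(argv[1]); // SimGrid XML platform file

        // Simulated CI services, placed on hosts from the platform file
        // (hostnames and constructor arguments are illustrative).
        auto storage = simulation.add(
            new wrench::SimpleStorageService("StorageHost", {"/"}));
        auto compute = simulation.add(
            new wrench::BareMetalComputeService("ComputeHost", {"ComputeHost"},
                                                "/scratch", {}, {}));

        wrench::Workflow workflow; // tasks, files, and their dependencies
        workflow.addTask("task_0", 1e10, 1, 1, 0.0); // id, flops, min/max
                                                     // cores, memory (varies
                                                     // by release)

        auto wms = simulation.add(new MyWMS({compute}, {storage}, "WMSHost"));
        wms->addWorkflow(&workflow);

        simulation.launch(); // run the discrete-event simulation to completion
        return 0;
    }

The design point to notice is that the CI complexity (compute, storage, and registry services) comes as reusable simulated services, so the only substantial user code is the WMS logic itself.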


Reference to the paper:

  • [PDF] [DOI] H. Casanova, S. Pandey, J. Oeth, R. Tanaka, F. Suter, and R. Ferreira da Silva, “WRENCH: A Framework for Simulating Workflow Management Systems,” in 13th Workshop on Workflows in Support of Large-Scale Science (WORKS’18), 2018, p. 74–85.
    [Bibtex]
    @inproceedings{casanova-works-2018,
    title = {{WRENCH: A Framework for Simulating Workflow Management Systems}},
    author = {Casanova, Henri and Pandey, Suraj and Oeth, James and Tanaka, Ryan and Suter, Frederic and Ferreira da Silva, Rafael},
    booktitle = {13th Workshop on Workflows in Support of Large-Scale Science (WORKS'18)},
    year = {2018},
    pages = {74--85},
    doi = {10.1109/WORKS.2018.00013}
    }


The Interplay of Workflow Execution and Resource Provisioning


Presentation held at the 18th SIAM Conference on Parallel Processing for Scientific Computing, 2018
Resource Management, Scheduling, Workflows: Critical Middleware for HPC and Clouds
Tokyo, Japan

Abstract – This talk will examine issues of workflow execution, in particular using the Pegasus Workflow Management System, on distributed resources, and how these resources can be provisioned ahead of the workflow execution. Pegasus was designed, implemented, and supported to provide abstractions that enable scientists to focus on structuring their computations without worrying about the details of the target cyberinfrastructure. To support these workflow abstractions, Pegasus provides automation capabilities that seamlessly map workflows onto target resources, sparing scientists the overhead of managing the data flow, job scheduling, fault recovery, and adaptation of their applications. In some cases, it is beneficial to provision the resources ahead of the workflow execution, enabling the reuse of resources across workflow tasks. The talk will examine the benefits of resource provisioning for workflow execution.


On the Use of Burst Buffers for Accelerating Data-Intensive Scientific Workflows


Presentation held at the 12th Workshop on Workflows in Support of Large-Scale Science, 2017
Denver, CO, USA – Supercomputing’17

Abstract – Science applications frequently produce and consume large volumes of data, but delivering this data to and from compute resources can be challenging, as parallel file system performance is not keeping up with compute and memory performance. To mitigate this I/O bottleneck, some systems have deployed burst buffers, but their impact on performance for real-world workflow applications is not always clear. In this paper, we examine the impact of burst buffers through the remote-shared, allocatable burst buffers on the Cori system at NERSC. By running a subset of the SCEC CyberShake workflow, a production seismic hazard analysis workflow, we find that using burst buffers offers read and write improvements of about an order of magnitude, and these improvements lead to increased job performance, even for long-running CPU-bound jobs.


Related Publication

  • [PDF] [DOI] R. Ferreira da Silva, S. Callaghan, and E. Deelman, “On the Use of Burst Buffers for Accelerating Data-Intensive Scientific Workflows,” in 12th Workshop on Workflows in Support of Large-Scale Science (WORKS’17), 2017.
    [Bibtex]
    @inproceedings{ferreiradasilva-works-2017,
    title = {On the Use of Burst Buffers for Accelerating Data-Intensive Scientific Workflows},
    author = {Ferreira da Silva, Rafael and Callaghan, Scott and Deelman, Ewa},
    booktitle = {12th Workshop on Workflows in Support of Large-Scale Science (WORKS'17)},
    year = {2017},
    doi = {10.1145/3150994.3151000}
    }


Using Simple PID Controllers to Prevent and Mitigate Faults in Scientific Workflows


Presentation held at the 11th Workshop on Workflows in Support of Large-Scale Science, 2016
Salt Lake City, UT, USA – Supercomputing’16

Abstract – Scientific workflows have become mainstream for conducting large-scale scientific research. As a result, many workflow applications and Workflow Management Systems (WMSs) have been developed as part of the cyberinfrastructure to allow scientists to execute their applications seamlessly on a range of distributed platforms. In spite of many success stories, a key challenge for running workflows in distributed systems is failure prediction, detection, and recovery. In this paper, we propose an approach that uses control theory developed as part of autonomic computing to predict failures before they happen, and to mitigate them when possible. The proposed approach applies the proportional-integral-derivative (PID) controller control loop mechanism, which is widely used in industrial control systems, to mitigate faults by adjusting the inputs of the controller. The PID controller aims at detecting the possibility of a fault far enough in advance so that an action can be performed to prevent it from happening. To demonstrate the feasibility of the approach, we tackle two common execution faults of the Big Data era: data storage overload and memory overflow. We define, implement, and evaluate simple PID controllers to autonomously manage data and memory usage of a bioinformatics workflow that consumes/produces over 4.4TB of data and requires over 24TB of memory to run all tasks concurrently. Experimental results indicate that workflow executions may significantly benefit from PID controllers, in particular under online and unknown conditions. Simulation results show that near-optimal executions (slowdown of 1.01) can be attained when using our proposed method, and that faults are detected and mitigated far in advance of their occurrence.
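As a concrete illustration of the control-loop mechanism described above, here is a minimal, self-contained PID controller in C++ (C++17). The gains and the toy disk-usage scenario are hypothetical, for exposition only, and are not the controllers or parameters used in the paper; in the paper's setting, the control output would instead drive actions such as cleaning up unneeded intermediate files or throttling task submissions:

    #include <algorithm>
    #include <iostream>

    // Textbook discrete PID controller. Gains and the disk-usage scenario
    // below are hypothetical, for illustration; not taken from the paper.
    struct PIDController {
        double kp, ki, kd;       // proportional, integral, derivative gains
        double integral = 0.0;   // accumulated error
        double prev_error = 0.0; // error at the previous sample

        // One control step: setpoint is the target (e.g., fraction of the
        // disk quota in use), measured is the current observation, dt the
        // sampling period in seconds.
        double step(double setpoint, double measured, double dt) {
            const double error = setpoint - measured;
            integral += error * dt;
            const double derivative = (error - prev_error) / dt;
            prev_error = error;
            return kp * error + ki * integral + kd * derivative;
        }
    };

    int main() {
        PIDController pid{0.8, 0.2, 0.1}; // hypothetical gains
        double usage = 0.95;              // disk usage above the 0.8 setpoint
        for (int t = 0; t < 10; ++t) {
            // A negative output signals overshoot: a workflow system would
            // react by cleaning up files or delaying task submissions.
            const double output = pid.step(0.8, usage, 1.0);
            usage = std::clamp(usage + 0.03 + 0.1 * output, 0.0, 1.0); // toy dynamics
            std::cout << "t=" << t << " usage=" << usage
                      << " control=" << output << "\n";
        }
        return 0;
    }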


Related Publication

  • [PDF] R. Ferreira da Silva, R. Filgueira, E. Deelman, E. Pairo-Castineira, I. M. Overton, and M. Atkinson, “Using Simple PID Controllers to Prevent and Mitigate Faults in Scientific Workflows,” in 11th Workflows in Support of Large-Scale Science, 2016, p. 15–24.
    [Bibtex]
    @inproceedings{ferreiradasilva-works-2016,
    author = {Ferreira da Silva, Rafael and Filgueira, Rosa and Deelman, Ewa and Pairo-Castineira, Erola and Overton, Ian Michael and Atkinson, Malcolm},
    title = {Using Simple PID Controllers to Prevent and Mitigate Faults in Scientific Workflows},
    year = {2016},
    booktitle = {11th Workflows in Support of Large-Scale Science},
    series = {WORKS'16},
    pages = {15--24}
    }


Automating Real-time Seismic Analysis Through Streaming and High Throughput Workflows


Presentation held at the Workshop on Environmental Computing Applications, 2016
Baltimore, MD, USA – IEEE 12th International Conference on eScience

Abstract – In order to support the computational and data needs of today’s science, new knowledge must be gained on how to deliver the growing capabilities of the national cyberinfrastructures, and more recently commercial clouds, to the scientist’s desktop in an accessible, reliable, and scalable way. In over a decade of working with domain scientists, the Pegasus workflow management system has been used by researchers to model seismic wave propagation, to discover new celestial objects, to study RNA critical to human brain development, and to investigate other important research questions. Recently, the Pegasus and dispel4py teams have collaborated to enable automated processing of real-time seismic interferometry and earthquake “repeater” analysis using data collected from the IRIS database. The proposed integrated solution empowers real-time stream-based workflows to seamlessly run on different distributed infrastructures (or in the wide area), where data is automatically managed by a task-oriented workflow system that orchestrates the distributed execution. We have demonstrated the feasibility of this approach by using Docker containers to deploy the workflow management systems on two different computing infrastructures: an Apache Storm cluster for real-time processing, and an MPI-based cluster for shared-memory computing. Stream-based execution is managed by dispel4py, while data movement between the clusters and the workflow engine (submit host) is managed by Pegasus.


Related Publication

  • [PDF] [DOI] R. Ferreira da Silva, E. Deelman, R. Filgueira, K. Vahi, M. Rynge, R. Mayani, and B. Mayer, “Automating Environmental Computing Applications with Scientific Workflows,” in Environmental Computing Workshop, IEEE 12th International Conference on e-Science, 2016, p. 400–406.
    [Bibtex]
    @inproceedings{ferreiradasilva-ecw-2016,
    author = {Ferreira da Silva, Rafael and Deelman, Ewa and Filgueira, Rosa and Vahi, Karan and Rynge, Mats and Mayani, Rajiv and Mayer, Benjamin},
    title = {Automating Environmental Computing Applications with Scientific Workflows},
    year = {2016},
    booktitle = {Environmental Computing Workshop, IEEE 12th International Conference on e-Science},
    series = {ECW'16},
    doi = {10.1109/eScience.2016.7870926},
    pages = {400--406}
    }


Performance Analysis of an I/O-Intensive Workflow executing on Google Cloud and Amazon Web Services


Presentation held at the 18th Workshop on Advances in Parallel and Distributed Computational Models, 2016
Chicago, IL, USA – 30th IEEE International Parallel and Distributed Processing Symposium

Abstract – Scientific workflows have become mainstream for conducting large-scale scientific research. In the meantime, cloud computing has emerged as an alternative computing paradigm. In this paper, we conduct an analysis of the performance of a real I/O-intensive scientific workflow in cloud environments, using makespan (the turnaround time for a workflow to complete its execution) as the key performance metric. In particular, we assess the impact of varying storage configurations on workflow performance when executing on Google Cloud and Amazon Web Services, aiming to understand the performance bottlenecks of these popular cloud-based execution environments. Experimental results show significant differences in application performance across configurations. They also reveal that Amazon Web Services outperforms Google Cloud with equivalent application and system configurations. We then investigate the root cause of these results using provenance data and by benchmarking disk and network I/O on both infrastructures. Lastly, we suggest modifications to the standard cloud storage APIs that would reduce the makespan of I/O-intensive workflows.
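In symbols (notation ours, matching the parenthetical definition above), the makespan M of a workflow with task set T is the turnaround time from the first task start to the last task completion:

    M = \max_{t \in T} \mathrm{finish}(t) - \min_{t \in T} \mathrm{start}(t)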


Related Publication

  • [PDF] [DOI] H. Nawaz, G. Juve, R. Ferreira da Silva, and E. Deelman, “Performance Analysis of an I/O-Intensive Workflow executing on Google Cloud and Amazon Web Services,” in 18th Workshop on Advances in Parallel and Distributed Computational Models, 2016, p. 535–544.
    [Bibtex]
    @inproceedings{nawaz-apdcm-2016,
    author = {Nawaz, Hassan and Juve, Gideon and Ferreira da Silva, Rafael and Deelman, Ewa},
    title = {Performance Analysis of an I/O-Intensive Workflow executing on Google Cloud and Amazon Web Services},
    booktitle = {18th Workshop on Advances in Parallel and Distributed Computational Models},
    series = {APDCM'16},
    year = {2016},
    doi = {10.1109/IPDPSW.2016.90},
    pages = {535--544}
    }
