Task Resource Consumption Prediction for Scientific Applications and Workflows


Presentation held at the Dagstuhl Seminar on Algorithms and Scheduling Techniques to Manage Resilience and Power Consumption in Distributed Systems, 2015
Dagstuhl, Germany

Abstract – Estimates of task runtime, disk space usage, and memory consumption are commonly used by scheduling and resource provisioning algorithms to support efficient and reliable executions of scientific applications. These algorithms often assume that accurate estimates are available, yet such estimates are difficult to generate in practice. In this work, we first profile real scientific applications and workflows, collecting fine-grained information such as process I/O, runtime, memory usage, and CPU utilization. We then propose a method to automatically characterize task requirements based on these profiles, estimating task runtime, disk space, and peak memory consumption. The method looks for correlations between the parameters of a dataset; if no correlation is found, the dataset is divided into smaller subsets using statistical recursive partitioning and conditional inference trees to identify patterns that characterize particular behaviors of the workload. We then propose an estimation process that predicts task characteristics of scientific applications based on the collected data. For scientific workflows, we propose an online estimation process based on the MAPE-K loop, in which task executions are monitored and estimates are updated as more information becomes available. Experimental results show that our online estimation process yields much more accurate predictions than an offline approach in which all task requirements are estimated prior to workflow execution.
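To make the two-stage idea in the abstract concrete, below is a minimal sketch in Python of how it could fit together. The 0.8 correlation cutoff, all names, and the use of scikit-learn's DecisionTreeRegressor as a stand-in for conditional inference trees (available as ctree in R's party package) are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.stats import pearsonr
from sklearn.tree import DecisionTreeRegressor

CORR_THRESHOLD = 0.8  # hypothetical cutoff for a "strong" correlation

def fit_estimator(input_sizes, runtimes):
    """Build a runtime estimator from historical (input size, runtime) samples."""
    sizes = np.asarray(input_sizes, dtype=float)
    times = np.asarray(runtimes, dtype=float)
    r, _ = pearsonr(sizes, times)
    if abs(r) >= CORR_THRESHOLD:
        # Correlated case: model runtime as a ratio of input data size.
        ratio = np.median(times / sizes)
        return lambda size: ratio * size
    # Uncorrelated case: partition the samples into subsets with more
    # homogeneous behavior and predict the subset mean (tree stand-in).
    tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=2)
    tree.fit(sizes.reshape(-1, 1), times)
    return lambda size: float(tree.predict([[size]])[0])

# Online, MAPE-K-style use: monitor executions, feed measurements back,
# and re-fit so that estimates improve while the workflow runs.
sizes, times = [10.0, 20.0, 40.0, 80.0], [5.0, 9.0, 21.0, 39.0]
estimate = fit_estimator(sizes, times)
print("predicted runtime for a 50 MB input:", estimate(50.0))

sizes.append(50.0)   # a task completed: record its measured input size...
times.append(26.0)   # ...and runtime, then update the model
estimate = fit_estimator(sizes, times)

In the online process described in the paper this re-fitting would happen inside a monitoring loop rather than ad hoc as above.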

 

Related Publications

  • R. Ferreira da Silva, G. Juve, M. Rynge, E. Deelman, and M. Livny, “Online Task Resource Consumption Prediction for Scientific Workflows,” Parallel Processing Letters, vol. 25, iss. 3, 2015.
    @article{ferreiradasilva-ppl-2015,
    title = {Online Task Resource Consumption Prediction for Scientific Workflows},
    author = {Ferreira da Silva, Rafael and Juve, Gideon and Rynge, Mats and Deelman, Ewa and Livny, Miron},
    journal = {Parallel Processing Letters},
    volume = {25},
    number = {3},
    year = {2015},
    doi = {10.1142/S0129626415410030}
    }
  • R. Ferreira da Silva, M. Rynge, G. Juve, I. Sfiligoi, E. Deelman, J. Letts, F. Würthwein, and M. Livny, “Characterizing a High Throughput Computing Workload: The Compact Muon Solenoid (CMS) Experiment at LHC,” Procedia Computer Science, vol. 51, pp. 39-48, 2015.
    @article{ferreiradasilva-iccs-2015,
    title = {Characterizing a High Throughput Computing Workload: The Compact Muon Solenoid ({CMS}) Experiment at {LHC}},
    author = {Ferreira da Silva, Rafael and Rynge, Mats and Juve, Gideon and Sfiligoi, Igor and Deelman, Ewa and Letts, James and W\"urthwein, Frank and Livny, Miron},
    journal = {Procedia Computer Science},
    year = {2015},
    volume = {51},
    pages = {39--48},
note = {International Conference On Computational Science, {ICCS} 2015 Computational Science at the Gates of Nature},
    doi = {10.1016/j.procs.2015.05.190}
    }
  • R. Ferreira da Silva, G. Juve, E. Deelman, T. Glatard, F. Desprez, D. Thain, B. Tovar, and M. Livny, “Toward fine-grained online task characteristics estimation in scientific workflows,” in 8th Workshop on Workflows in Support of Large-Scale Science, 2013, pp. 58-67.
    @inproceedings{ferreiradasilva-works-2013,
    author = {Ferreira da Silva, Rafael and Juve, Gideon and Deelman, Ewa and Glatard, Tristan and Desprez, Fr{\'e}d{\'e}ric and Thain, Douglas and Tovar, Benjamin and Livny, Miron},
    title = {Toward fine-grained online task characteristics estimation in scientific workflows},
    booktitle = {8th Workshop on Workflows in Support of Large-Scale Science},
    series = {WORKS '13},
    year = {2013},
    pages = {58--67},
    doi = {10.1145/2534248.2534254},
    }

 


Characterizing a High Throughput Computing Workload: The Compact Muon Solenoid (CMS) Experiment at LHC


Presentation held at the International Conference on Computational Science (ICCS 2015)
Reykjavik, Iceland

Abstract – High throughput computing (HTC) has aided the scientific community in the analysis of vast amounts of data and computational jobs in distributed environments. To manage these large workloads, several systems have been developed to efficiently allocate and provide access to distributed resources. Many of these systems rely on estimates of job characteristics (e.g., job runtime) to characterize workload behavior, but such estimates are hard to obtain in practice. In this work, we perform an exploratory analysis of the CMS experiment workload using statistical recursive partitioning and conditional inference trees to identify patterns that characterize particular behaviors of the workload. We then propose an estimation process to predict job characteristics based on the collected data. Experimental results show that our process estimates job runtime with 75% accuracy on average, and produces nearly optimal predictions for disk and memory consumption.
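As a rough illustration of the recursive-partitioning idea, the toy partitioner below splits a job log on categorical attributes only when the split clearly reduces runtime variance, in the spirit of conditional inference trees (the analysis in the paper applies the statistical method directly, e.g. ctree in R). The attribute names, job records, and the 10% variance-reduction threshold are assumptions made up for this sketch.

import statistics
from collections import defaultdict

MIN_GAIN = 0.10  # require at least a 10% variance reduction to split

def partition(jobs, attributes, path=()):
    """Recursively split `jobs` (dicts) on categorical attributes; yield leaves."""
    if len(jobs) < 4 or not attributes:
        yield path, jobs
        return
    base_var = statistics.pvariance([j["runtime"] for j in jobs])
    best_attr, best_gain = None, MIN_GAIN
    for attr in attributes:
        groups = defaultdict(list)
        for j in jobs:
            groups[j[attr]].append(j["runtime"])
        if len(groups) < 2:
            continue
        # Size-weighted within-group variance after splitting on `attr`.
        split_var = sum(len(g) * statistics.pvariance(g)
                        for g in groups.values()) / len(jobs)
        gain = (base_var - split_var) / base_var if base_var else 0.0
        if gain > best_gain:
            best_attr, best_gain = attr, gain
    if best_attr is None:
        yield path, jobs  # no attribute explains enough variance: stop
        return
    subsets = defaultdict(list)
    for j in jobs:
        subsets[j[best_attr]].append(j)
    rest = [a for a in attributes if a != best_attr]
    for value, subset in subsets.items():
        yield from partition(subset, rest, path + ((best_attr, value),))

jobs = [
    {"site": "T2_US", "type": "analysis",   "runtime": 120},
    {"site": "T1_DE", "type": "analysis",   "runtime": 150},
    {"site": "T2_US", "type": "production", "runtime": 3600},
    {"site": "T1_DE", "type": "production", "runtime": 3500},
    {"site": "T2_US", "type": "analysis",   "runtime": 130},
    {"site": "T1_DE", "type": "production", "runtime": 3550},
]
for leaf, subset in partition(jobs, ["site", "type"]):
    mean_rt = statistics.mean(j["runtime"] for j in subset)
    print(leaf, "-> estimated runtime:", round(mean_rt))

Each leaf's mean (or median) then serves as the prediction for new jobs that match the leaf's attribute values.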

 

Related Publication

  • R. Ferreira da Silva, M. Rynge, G. Juve, I. Sfiligoi, E. Deelman, J. Letts, F. Würthwein, and M. Livny, “Characterizing a High Throughput Computing Workload: The Compact Muon Solenoid (CMS) Experiment at LHC,” Procedia Computer Science, vol. 51, pp. 39-48, 2015.
    @article{ferreiradasilva-iccs-2015,
    title = {Characterizing a High Throughput Computing Workload: The Compact Muon Solenoid ({CMS}) Experiment at {LHC}},
    author = {Ferreira da Silva, Rafael and Rynge, Mats and Juve, Gideon and Sfiligoi, Igor and Deelman, Ewa and Letts, James and W\"urthwein, Frank and Livny, Miron},
    journal = {Procedia Computer Science},
    year = {2015},
    volume = {51},
    pages = {39--48},
note = {International Conference On Computational Science, {ICCS} 2015 Computational Science at the Gates of Nature},
    doi = {10.1016/j.procs.2015.05.190}
    }

 


A Science-Gateway Workload Archive to Study Pilot Jobs, User Activity, Bag of Tasks, Task Sub-steps, and Workflow Executions


Presentation held at the Workshop on Grids, Clouds, and P2P Computing (CGWS 2012)
Rhodes Island, Greece – Euro-Par 2012

Abstract – Archives of distributed workloads acquired at the infrastructure level are known to lack information about users and application-level middleware. Science gateways provide consistent access points to the infrastructure, and are therefore an interesting source of information to address this issue. In this paper, we describe a workload archive acquired at the science-gateway level, and we show its added value through several case studies related to user accounting, pilot jobs, fine-grained task analysis, bags of tasks, and workflows. Results show that science-gateway workload archives can detect workload wrapped in pilot jobs, improve user identification, provide information on distributions of data transfer times, make bag-of-tasks detection accurate, and retrieve characteristics of workflow executions. Some limitations are also identified.
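Among the case studies, bag-of-tasks detection is the most algorithmic: tasks submitted by the same user within a short interval are grouped into one bag. The sketch below shows the time-gap flavor of this grouping; the 120-second threshold and the (user, submit_time) task layout are illustrative assumptions, not parameters taken from the archive.

from itertools import groupby
from operator import itemgetter

GAP_SECONDS = 120  # hypothetical maximum gap between tasks in the same bag

def detect_bags(tasks):
    """Group (user, submit_time) tasks into per-user bags by submission gap."""
    bags = []
    tasks = sorted(tasks, key=itemgetter(0, 1))  # by user, then by time
    for user, user_tasks in groupby(tasks, key=itemgetter(0)):
        bag, last_time = [], None
        for _, t in user_tasks:
            if last_time is not None and t - last_time > GAP_SECONDS:
                bags.append((user, bag))  # gap too large: close the bag
                bag = []
            bag.append(t)
            last_time = t
        bags.append((user, bag))
    return bags

tasks = [("alice", 0), ("alice", 30), ("alice", 500), ("bob", 10), ("bob", 40)]
for user, bag in detect_bags(tasks):
    print(user, "submitted a bag of", len(bag), "task(s)")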

 

Related Publication

  • R. Ferreira da Silva and T. Glatard, “A Science-Gateway Workload Archive to Study Pilot Jobs, User Activity, Bag of Tasks, Task Sub-steps, and Workflow Executions,” in Euro-Par 2012: Parallel Processing Workshops, I. Caragiannis, M. Alexander, R. Badia, M. Cannataro, A. Costan, M. Danelutto, F. Desprez, B. Krammer, J. Sahuquillo, S. Scott, and J. Weidendorfer, Eds., Lecture Notes in Computer Science, 2013, vol. 7640, pp. 79-88.
    @incollection{ferreiradasilva-cgws-2013,
    year = {2013},
    booktitle = {Euro-Par 2012: Parallel Processing Workshops},
    volume = {7640},
    series = {Lecture Notes in Computer Science},
editor = {Caragiannis, Ioannis and Alexander, Michael and Badia, Rosa Maria and Cannataro, Mario and Costan, Alexandru and Danelutto, Marco and Desprez, Fr\'ed\'eric and Krammer, Bettina and Sahuquillo, Julio and Scott, Stephen L. and Weidendorfer, Josef},
    doi = {10.1007/978-3-642-36949-0_10},
    title = {A Science-Gateway Workload Archive to Study Pilot Jobs, User Activity, Bag of Tasks, Task Sub-steps, and Workflow Executions},
    author = {Ferreira da Silva, Rafael and Glatard, Tristan},
    pages = {79--88}
    }

 
