This is a short list of featured presentations. For a complete list of my presentations, please visit my SlideShare profile.


  • Using Simple PID Controllers to Prevent and Mitigate Faults in Scientific Workflows

    Presentation held at the 11th Workshop on Workflows in Support of Large-Scale Science, 2016 Salt Lake City, UT, USA – SuperComputing’16 Abstract – Scientific workflows have become mainstream for conducting large-scale scientific research. As a result, many workflow applications and Workflow Management Systems (WMSs) have been developed as part of the cyberinfrastructure to allow scientists to execute […]

  • Automating Real-time Seismic Analysis Through Streaming and High Throughput Workflows

    Presentation held at the Workshop on Environmental Computing Applications, 2016 Baltimore, MD, USA – IEEE 12th International Conference on eScience Abstract – In order to support the computational and data needs of today’s science, new knowledge must be gained on how to deliver the growing capabilities of the national cyberinfrastructures and more recently commercial clouds […]

  • Performance Analysis of an I/O-Intensive Workflow executing on Google Cloud and Amazon Web Services

    Presentation held at the 18th Workshop on Advances in Parallel and Distributed Computational Models, 2016 Chicago, IL, USA – 30th IEEE International Parallel and Distributed Processing Symposium Abstract – Scientific workflows have become mainstream for conducting large-scale scientific research. In the meantime, cloud computing has emerged as an alternative computing paradigm. In this paper, […]

  • Pegasus: automate, recover, and debug scientific computations

    Automate scientific computational work as portable workflows. Pegasus automatically locates the necessary input data and computational resources, and manages storage space for executing data-intensive workflows on storage-constrained resources. Recover from failures at runtime: tasks are automatically retried in the presence of errors, and a rescue workflow containing a description of only the work that remains is provided. […]

  • Task Resource Consumption Prediction for Scientific Applications and Workflows

    Presentation held at the Algorithms and Scheduling Techniques to Manage Resilience and Power Consumption in Distributed Systems seminar, 2015 Dagstuhl, Germany Abstract – Estimates of task runtime, disk space usage, and memory consumption are commonly used by scheduling and resource provisioning algorithms to support efficient and reliable scientific application executions. Such algorithms often assume that accurate […]

  • Characterizing a High Throughput Computing Workload: The Compact Muon Solenoid (CMS) Experiment at LHC

    Presentation held at the International Conference on Computational Science (ICCS), 2015 Reykjavik, Iceland Abstract – High throughput computing (HTC) has aided the scientific community in the analysis of vast amounts of data and computational jobs in distributed environments. To manage these large workloads, several systems have been developed to efficiently allocate and provide access to distributed resources. Many of these systems […]

  • A science-gateway workload archive to study pilot jobs, user activity, bag of tasks, task sub-steps, and workflow executions

    Presentation held at the Workshop on Grids, Clouds, and P2P Computing (CGWS), 2012 Rhodes Island, Greece – Euro-Par 2012 Abstract – Archives of distributed workloads acquired at the infrastructure level reputedly lack information about users and application-level middleware. Science gateways provide consistent access points to the infrastructure, and therefore are an interesting information source to cope […]