Open|SpeedShop is a community effort led by The Krell Institute, with current direct funding from DOE’s NNSA and Office of Science. It builds on a broad set of community infrastructures, most notably Dyninst and MRNet from UW, libmonitor from Rice, and PAPI from UTK. Open|SpeedShop is an open-source, multi-platform Linux performance tool targeted at performance analysis of applications running on both single-node and large-scale IA64, IA32, EM64T, AMD64, PPC, ARM, Blue Gene, and Cray platforms.
Open|SpeedShop is explicitly designed with usability in mind, serving both application developers and computer scientists. The base functionality includes:
- Program Counter Sampling
- Support for Callstack Analysis
- Hardware Performance Counter Sampling, including threshold-based sampling
- MPI Lightweight Profiling and Tracing
- I/O Lightweight Profiling and Tracing
- Floating Point Exception Analysis
- Memory Trace Analysis
- POSIX Thread Trace Analysis
In addition, Open|SpeedShop is designed to be modular and extensible. It supports several levels of plug-ins which allow users to add their own performance experiments.
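Each of the experiment types above is typically launched through a convenience script that wraps the unmodified application command line. The session below is a sketch rather than a verbatim transcript: the script names follow the documented oss&lt;experiment&gt; pattern (e.g. osspcsamp, ossio), and the application name, input file, MPI launcher, and rank count are placeholders.

```shell
# Program counter sampling of a sequential run
# (./my_app and input.dat are placeholder names):
osspcsamp "./my_app input.dat"

# I/O profiling of an MPI run; the launcher and rank count
# are examples and depend on your system:
ossio "mpirun -np 64 ./my_app input.dat"
```

Because the scripts wrap the normal launch command, no recompilation or source changes are needed.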
Open|SpeedShop development is hosted by the Krell Institute. The infrastructure and base components of Open|SpeedShop are released as open-source code, primarily under the LGPL.
- Comprehensive performance analysis for sequential, multithreaded, and MPI applications
- No need to recompile the user’s application
- Supports both first analysis steps as well as deeper analysis options for performance experts
- Easy to use GUI and fully scriptable through a command line interface and Python
- Supports Linux Systems and Clusters with Intel and AMD processors
- Extensible through new performance analysis plugins ensuring consistent look and feel
- In production use on all major cluster platforms at LANL, LLNL, and SNL
- Four user interface options: batch, command line interface, graphical user interface and Python scripting API.
- Supports multi-platform single system image (SSI) and traditional clusters.
- Scales to large numbers of processes, threads, and ranks.
- Ability to automatically create and attach to both sequential and parallel jobs from within Open|SpeedShop.
- View performance data using multiple customizable views.
- Save and restore performance experiment data and symbol information for post experiment performance analysis
- View performance data for an application’s entire lifetime or for smaller time slices.
- Compare performance results between processes, threads, or ranks, or between a previous experiment and the current experiment.
- GUI Wizard facility and context sensitive help.
- Interactive CLI help facility which lists the CLI commands, syntax, and typical usage.
- Python Scripting API accesses Open|SpeedShop functionality corresponding to CLI commands.
- Option to automatically group like-performing processes, threads, or ranks.
- Create traces in OTF (Open Trace Format).
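To make the interface options above concrete, the sketch below shows an interactive CLI session. The command names (expcreate, expgo, expview) follow the upstream documentation, while ./my_app is a placeholder application.

```shell
# Start the interactive command line interface:
openss -cli

# Inside the CLI (commands shown here as comments for illustration):
#   expcreate -f "./my_app" pcsamp   # create a program counter sampling experiment
#   expgo                            # run the experiment
#   expview                          # display the collected performance data
```

The same operations are available through the Python scripting API and the GUI, so a workflow prototyped interactively can be automated later.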
How to Use Open|SpeedShop: HPC-Admin Magazine Article
In this article, we describe how to use Open|SpeedShop through step-by-step examples that illustrate how to find a number of different performance bottlenecks. We also describe the tool’s most common usage model (workflow) and present several options for viewing performance data.
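As a preview of that common workflow: each experiment run writes a database file that can be reloaded for post-mortem analysis. In the sketch below, the script name (ossusertime), the database file name, and the application command are assumptions based on the tool’s documented naming conventions.

```shell
# 1. Run an experiment against the unmodified binary
#    (launcher, rank count, and app name are placeholders):
ossusertime "mpirun -np 16 ./my_app"

# 2. The run produces a database file (name is illustrative),
#    which can be reloaded later for analysis:
openss -cli -f my_app-usertime.openss
#   expview              # flat profile
#   expview -v calltrees # call-path view (option name per upstream docs)
```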
Open|SpeedShop at SC14
We will be at Supercomputing 2014 (SC14). This year we will not have a dedicated booth, but we will be giving demonstrations and will be available for discussion at the DOE booth (1939) and at the Emerging Technologies booth (Show Floor: 233).
We will also be available for informal demonstrations on any day of the conference, starting Monday morning. Please send email to jeg at krellinst.org to schedule.
Members of our team are presenting the “How to Analyze the Performance of Parallel Codes 101” half-day tutorial on Sunday, 11/16/14 (8:30am-12pm). See this page for the slides: http://www.openspeedshop.org/wp/?p=942