The stakeholders of high-performance computing (HPC) systems include users who run applications, application developers, the support and consulting staff of HPC facilities, systems administrators, managers of HPC facilities, and the agencies that fund them. These highly utilized and often oversubscribed systems enable computation- and data-intensive science and engineering, and their maximally productive use is important to all of these stakeholders. The recent availability of comprehensive resource-use data at the job and core level enables a new data-driven approach to systems design, optimization, and management: the emerging methods of data-driven, data-intensive science can now be applied to the management of HPC systems themselves. With the framework described here, the execution behavior of every job on an HPC system can be characterized in terms of its resource use without recompiling the application code. These data can also be aggregated across many different sets of attributes, including users, applications, application domains, and "good" or "bad" resource-use patterns, and are readily available through the XDMoD and Open XDMoD tools (XDMoD is specific to XSEDE resources, while Open XDMoD is open source and available to any HPC center). XDMoD and Open XDMoD implement a very broad range of reports targeting the information needs of all stakeholder types.
The XDMoD system is or will be available on most major XD Net HPC resource providers, including Stampede. Open XDMoD, for academic and industrial HPC centers, is available for download now. The information needs of all HPC stakeholders, from individual users and developers of community codes to systems administrators, consultants, systems managers, and facility funding agencies, can thus be addressed systematically and comprehensively.
The design emphasis for supercomputing systems has moved from raw performance to performance per watt, and as a result supercomputing architectures are converging on processors with wide vector units and many processing cores per chip. Such processors are capable of performant image rendering purely in software. This improved capability is fortuitous, since the prevailing homogeneous system designs lack the dedicated, hardware-accelerated rendering subsystems traditionally used for data visualization. Reliance on this "software-defined" rendering capability will grow in importance: as data sizes grow, visualizations must be performed on the same machine where the data are produced, and as data sizes outgrow disk I/O capacity, visualization will increasingly be incorporated into the simulation code itself (in situ visualization).
Fortunately, a number of recently developed software packages harness the improved rendering performance of general-purpose processors and expose that performance through common visualization toolkits. These packages include some that render via rasterization, the algorithm used in hardware accelerators, and others that render via ray tracing, an algorithm that physically simulates the travel of light. To produce insightful visualizations most efficiently, the visualization scientist should understand the capabilities, performance characteristics, and precision limits of each rendering method.
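To make the ray-tracing concept concrete, the core primitive every ray tracer relies on is computing where a ray strikes geometry. The sketch below is a minimal, illustrative ray-sphere intersection test (it is not code from any of the packages discussed; names and the quadratic formulation are our own):

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest intersection
    with the sphere, or None if the ray misses.
    Assumes `direction` is a unit vector."""
    # Vector from the sphere center to the ray origin
    oc = [o - c for o, c in zip(origin, center)]
    # With a unit direction, the intersection distances solve
    # t^2 + 2*b*t + c = 0
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = -b - math.sqrt(disc)  # nearer of the two roots
    return t if t >= 0 else None

# A ray along +z from the origin hits a unit sphere centered at (0, 0, 5)
t_hit = ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)   # 4.0
t_miss = ray_sphere_intersect((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0)  # None
```

A full renderer repeats this test (against triangles or other primitives, accelerated by spatial data structures) for every pixel's ray, which is why wide vector units and many cores map so well onto the workload.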
In this half-day introductory tutorial, we present a primer on rasterization and ray tracing, an overview of (mostly) open-source software packages available to the open-science community, and hands-on experience with the fundamental techniques. We begin with a brief review of terms and concepts to ensure that all participants have a working knowledge of the material covered in the remainder of the tutorial. We then motivate the concepts through three application lightning talks that demonstrate the use of rasterization and ray tracing in real domain applications. Finally, participants apply the concepts in guided hands-on visualization labs using the TACC XSEDE resources Stampede and Maverick.