Sunday, July 26
 

3:00pm CDT

Registration

Sunday July 26, 2015 3:00pm - 7:00pm CDT
Landmark Registration Office

6:30pm CDT

Student & Student Mentors Dinner (closed)
This is a closed event limited to students participating in the XSEDE15 student program and their mentors
 
• 6:30-6:45pm Arrive
• 6:45-7:00pm Ice breaker activity
• 7:00-7:15pm Dinner (buffet)
• 7:15-8:15pm Intro and keynote from Ed Seidel
• 8:15-8:30pm Q&A/Networking
 
Dinner speaker: Ed Seidel, Director, National Center for Supercomputing Applications
 
Supercomputing has reached a level of maturity and capability where many areas of science and engineering are not only advancing rapidly due to computing power, they cannot progress without it. Detailed simulations of complex astrophysical phenomena, HIV, earthquake events, and industrial engineering processes are being carried out, leading to major scientific breakthroughs or new products that cannot be achieved any other way. These simulations typically require larger and larger teams, with more and more complex software environments to support them, as well as real-world data. But as experiments and observation systems now generate unprecedented amounts of data, which must also be analyzed via large-scale computation and compared with simulation, a new type of highly integrated environment is needed, in which computing, experiment, and data services are developed together. I will illustrate with examples from NCSA's Blue Waters supercomputer and from major data-intensive projects including the Large Synoptic Survey Telescope, and give thoughts on what will be needed going forward.


Sunday July 26, 2015 6:30pm - 8:30pm CDT
Statler Ballroom

7:00pm CDT

Campus Champion Meeting
Sunday July 26, 2015 7:00pm - 9:00pm CDT
Crystal Ballroom
 
Monday, July 27
 

7:00am CDT

Tutorial Breakfast
Tutorial breakfast for tutorial presenters and attendees only

Monday July 27, 2015 7:00am - 8:00am CDT
Majestic E

7:00am CDT

Registration

Monday July 27, 2015 7:00am - 4:30pm CDT
Landmark Registration Office

8:00am CDT

Student Tutorial: Supercomputing in Plain English, Part 1
* Lecture: Overview: What the Heck is Supercomputing? This session provides a broad overview of High Performance Computing (HPC). Topics include: what is supercomputing?; the fundamental issues of HPC (storage hierarchy, parallelism); hardware primer; introduction to the storage hierarchy; introduction to parallelism via an analogy (multiple people working on a jigsaw puzzle); Moore's Law; the motivation for using HPC.
* Lab: Running A Job on a Supercomputer. In this hands-on lab session, you'll get an account on one or more supercomputers, and you'll get a chance to run a job. If your Unix/Linux skills have gotten a little rusty, this will be a great refresher.
* Lecture: The Tyranny of the Storage Hierarchy. This session focuses on the implications of a fundamental reality: fast implies expensive implies small, and slow implies cheap implies large. Topics include: registers; cache, RAM, and the relationship between them; cache hits and misses; cache lines; cache mapping strategies (direct, fully associative, set associative); cache conflicts; write-through vs. write-back; locality; tiling; hard disk; virtual memory. A key point: parallel performance can be hard to predict or achieve without understanding the storage hierarchy.
* Lab: Running Benchmarks on a Supercomputer. In this hands-on lab session, you'll benchmark a matrix-matrix multiply code to discover the configuration that gets the best performance.
* Other topics may be introduced if time permits.
Content: Older versions of the lecture slides and exercise descriptions (which will be updated) may be found as follows:
* Lecture: Overview: What the Heck is Supercomputing? http://www.oscer.ou.edu/ncsiworkshop2012intropar_sipe_overview_20120729.pdf
* Lab: Running A Job on a Supercomputer http://www.oscer.ou.edu/ncsiworkshop2012intropar_exercise_learningbatch_boomer_20120729.pdf
* Lecture: The Tyranny of the Storage Hierarchy http://www.oscer.ou.edu/ncsiworkshop2012intropar_sipe_storage_20120730.pdf
* Lab: Running Benchmarks on a Supercomputer http://www.oscer.ou.edu/ncsiworkshop2012intropar_exercise_tiling_boomer_20120729.pdf
Prerequisites: one semester of programming in C or C++, recently; basic experience with any Unix-like operating system (could be Linux but doesn't have to be), recently. (Attendees with Fortran experience will be able to follow along.) No previous HPC experience is required.
Requirements: Laptop (Windows, MacOS or Linux) strongly recommended but not required.
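The tiling lab exercises exactly this interplay between loop structure and cache. As a rough illustration (our own sketch, not the exercise code linked above), a blocked matrix-matrix multiply in C, where the block size BS is the knob you would benchmark:

    /* Blocked (tiled) matrix multiply: C += A * B, all N x N, row-major.
       N and BS are illustrative; the lab's point is to time different
       block sizes against the machine's cache hierarchy. */
    #include <stdlib.h>

    #define N  1024
    #define BS   64   /* tile edge; try values whose tiles fit in cache */

    static void matmul_tiled(const double *a, const double *b, double *c)
    {
        for (int ii = 0; ii < N; ii += BS)
            for (int kk = 0; kk < N; kk += BS)
                for (int jj = 0; jj < N; jj += BS)
                    /* multiply one BS x BS tile that stays cache-resident */
                    for (int i = ii; i < ii + BS; i++)
                        for (int k = kk; k < kk + BS; k++) {
                            double aik = a[i * N + k];
                            for (int j = jj; j < jj + BS; j++)
                                c[i * N + j] += aik * b[k * N + j];
                        }
    }

    int main(void)
    {
        double *a = calloc(N * N, sizeof *a);
        double *b = calloc(N * N, sizeof *b);
        double *c = calloc(N * N, sizeof *c);
        matmul_tiled(a, b, c);
        free(a); free(b); free(c);
        return 0;
    }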


Monday July 27, 2015 8:00am - 12:00pm CDT
Majestic F

8:00am CDT

Tutorial: An Introduction to DataSpaces: An Extreme-Scale Data Management Framework
High performance infrastructures enable large-scale scientific applications and workflows to run with increased complexity and improved accuracy. However, as these workflows grow, they generate massive amounts of data that must be processed and analyzed in order to gain insight into the phenomena being studied. Traditional methods of running these types of workflows involve the applications writing and reading data from the parallel file system (or hard disk), but this approach is not practical at exascale. To solve this problem, the DataSpaces framework provides an in-memory data staging area that applications can use to communicate and interact at runtime without reading or writing from disk. This approach reduces read/write access times and allows different workflow stages to overlap when possible, thereby reducing the overall time to completion of the workflow. The goal of the DataSpaces project is to allow scientists and application programmers to run their codes efficiently on a variety of HPC infrastructures without knowledge of the underlying I/O subsystems or their management (including high-speed interconnects). In this tutorial, we will demonstrate the easy-to-use DataSpaces API and show application programmers how to enable data staging by integrating their code with DataSpaces. We will then show users how to run these codes on high-performance machines in order to accelerate the I/O associated with their applications and reduce the overall time-to-completion of the workflow, using TACC Stampede as an example.


Monday July 27, 2015 8:00am - 12:00pm CDT
Landmark 4

8:00am CDT

Tutorial: Introduction to CUDA Programming in C and Fortran
This tutorial is a beginning/intermediate course on programming NVIDIA GPUs with CUDA. After a short segment on why we are using accelerators in high performance computing and on accelerator hardware, we will describe all of the pieces necessary to write a CUDA program in C and Fortran. The example will be a stencil update, which is simple enough to be written in a few lines of code. The code design will be guided by the hardware; we will put emphasis on motivating common design principles by the desire to write fast code for GPU accelerators. In the second part of the presentation, we will focus on two common optimization strategies: using shared memory and overlapping computation with data transfer using CUDA streams. Experience with writing serial code in C or Fortran will be helpful to follow the examples.
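As a point of reference for what "a few lines of code" means here, a serial C version of such a stencil update might look like the sketch below (our assumption about the example's shape; the tutorial develops its own CUDA versions in C and Fortran, assigning one thread per grid point):

    /* One Jacobi-style stencil sweep over an N x N grid: each interior
       point becomes the average of its four neighbors. This is the
       serial pattern a CUDA kernel would parallelize. */
    #include <stdio.h>

    #define N 512

    static float in[N][N], out[N][N];

    static void stencil_sweep(void)
    {
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++)
                out[i][j] = 0.25f * (in[i - 1][j] + in[i + 1][j] +
                                     in[i][j - 1] + in[i][j + 1]);
    }

    int main(void)
    {
        in[N / 2][N / 2] = 1.0f;   /* a point source */
        stencil_sweep();
        printf("%f\n", out[N / 2 + 1][N / 2]);
        return 0;
    }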


Monday July 27, 2015 8:00am - 12:00pm CDT
Majestic A

8:00am CDT

Tutorial: MATLAB Hosting Program: Running MATLAB on XSEDE Resources and Scaling with Parallel Computing using a Hosted License Manager
In the past, MATLAB users have been required to use only their local cluster resources to run MATLAB. Recently, MathWorks created a new initiative, the MATLAB Hosting Program, which allows HPC data centers not only to host hardware resources, but also to install MATLAB software and make it available to external users. In turn, MATLAB users can leverage their own license at the data center. Both license requirements and licensing options will be discussed during the tutorial.
 
The MATLAB Hosting Program has now been made available on XSEDE resources such as PSC, TACC, and SDSC, and will expand to more sites in 2015. This gives MATLAB users a multitude of compute resources on which to run their models and simulations, with access to remote hardware resources (e.g. GPUs) and large data sets. This tutorial will show campus champions and end users how to run MATLAB remotely, as well as how to scale their code across multiple nodes using the Parallel Computing Toolbox and the MATLAB Distributed Computing Server. As each XSEDE resource has a different way of running MATLAB, the tutorial will walk through how to submit to each of the sites.


Monday July 27, 2015 8:00am - 12:00pm CDT
Majestic B

8:00am CDT

Tutorial: Open XDMoD – Providing comprehensive resource management for HPC Systems: Democratization of Access to HPC System Resource Use Information
The stakeholders of high performance computing (HPC) systems include users who run applications, application developers, support and consulting staff of HPC facilities, systems administrators, and the managers and funders of HPC facilities. Maximally productive use of these highly utilized and often oversubscribed systems, which enable computational and data-intensive engineering and science, is important to all of these stakeholders. The recent availability of comprehensive resource-use data at the job/core level enables a new data-driven approach to systems design, optimization and management. The emerging domain of "data driven/data intensive" science can now be applied to the management of HPC systems. With the framework described here, the execution behavior of all jobs on an HPC system can be characterized in terms of their resource use without the need to recompile the application code. This data can also be aggregated across many different sets of attributes, including users, applications, application domains, and "good" or "bad" resource-use patterns, and is readily available through the XDMoD and Open XDMoD tools (XDMoD is specific to XSEDE resources, while Open XDMoD is open source and available for any HPC center). XDMoD (and Open XDMoD) implements a very broad range of reports targeting the information needs of all stakeholder types.
 
The XDMoD system is or will be available on most or all major XD Net HPC resource providers, including Stampede. Open XDMoD for academic and industrial HPC centers is available for download now. The information needs of all stakeholders in HPC systems, ranging across individual users, developers of community codes, systems administrators and consultants, systems managers, and facility funding agencies, can thus all be systematically and comprehensively addressed.


Monday July 27, 2015 8:00am - 12:00pm CDT
Landmark 5

8:00am CDT

Tutorial: Optimization and Tuning of MPI and PGAS Applications using MVAPICH2 and MVAPICH2-X Libraries
MVAPICH2 software, supporting the latest MPI 3.0 standard, delivers the best performance, scalability and fault tolerance for high-end computing systems and servers using InfiniBand, 10/40 GigE/iWARP and RoCE networking technologies. The MVAPICH2-X software package provides support for hybrid MPI+PGAS (UPC, OpenSHMEM and CAF) programming models with a unified communication runtime. The MVAPICH2 and MVAPICH2-X software libraries (http://mvapich.cse.ohio-state.edu) are powering several supercomputers in the XSEDE program including Gordon, Keeneland, Lonestar4, Trestles and Stampede. These software libraries are being used by more than 2,350 organizations in 75 countries to extract the potential of these emerging networking technologies for modern systems. As of April '15, more than 248,000 downloads have taken place from this project's site. These software libraries also power several supercomputers in the TOP500 list, such as Stampede, Tsubame 2.5 and Pleiades.
 
A large number of XSEDE users use these libraries on a daily basis to run their MPI and PGAS applications. However, many of these users and the corresponding system administrators are not fully aware of all the features, optimizations and tuning techniques associated with these libraries. This tutorial aims to address these concerns. Further, as accelerators such as GPUs and MICs are commonly available on XSEDE resources, we present design support and optimization techniques for such systems. We will start with an overview of the MVAPICH2 and MVAPICH2-X libraries and their features. Next, we will focus on installation guidelines, runtime optimizations and tuning flexibility in depth. An overview of configuration and debugging support in MVAPICH2 and MVAPICH2-X will be presented, as will support for GPU- and MIC-enabled systems. Advanced optimization and tuning of MPI applications using the new MPI-T feature (defined by the MPI-3 standard) in MVAPICH2 will also be discussed. The performance impact of the various features and optimization techniques will be discussed in an integrated fashion. Finally, we present a case study of application redesign to take advantage of hybrid MPI+PGAS programming models.
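As a taste of the MPI-T interface mentioned above, the following minimal C sketch (ours, not tutorial material) enumerates the control variables, i.e. the tuning knobs, that an MPI-3 library such as MVAPICH2 exposes at runtime:

    /* Enumerate the MPI-T control variables exposed by the MPI library.
       Requires an MPI-3 implementation; compile with mpicc. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, ncvar;

        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_Init(&argc, &argv);

        MPI_T_cvar_get_num(&ncvar);
        for (int i = 0; i < ncvar; i++) {
            char name[256], desc[1024];
            int namelen = sizeof name, desclen = sizeof desc;
            int verbosity, binding, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_cvar_get_info(i, name, &namelen, &verbosity, &dtype,
                                &enumtype, desc, &desclen, &binding, &scope);
            printf("cvar %3d: %s\n", i, name);
        }

        MPI_Finalize();
        MPI_T_finalize();
        return 0;
    }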


Monday July 27, 2015 8:00am - 12:00pm CDT
Majestic C

8:00am CDT

Tutorial: Parallel I/O - for Reading and Writing Large Files in Parallel
Developing an understanding of efficient parallel I/O and adapting your application accordingly can result in orders-of-magnitude performance gains without overloading the parallel file system. This half-day tutorial will provide an overview of the practices and strategies for the efficient utilization of parallel file systems through parallel I/O for achieving high performance. The target audience is analysts and application developers who do not have prior experience with MPI I/O, HDF5, and T3PIO. However, they should be familiar with C/C++/Fortran programming and basic MPI. A brief overview of the related basic concepts will be included in the tutorial where needed.
 
All the concepts related to the tutorial will be explained with examples, and there will be a laboratory/hands-on session. In the hands-on session, the audience will be given four exercises in a time period of one hour. They will be provided with skeleton programs written in C/Fortran and instructions to modify the programs so that the modified programs can do parallel I/O. The programs provided for the hands-on session will include comments/placeholders to guide the audience in modifying the code. The hands-on session will help the audience test the knowledge gained during the tutorial. By the end of the tutorial, the audience will have learned how to perform parallel I/O (through MPI I/O and the high-level libraries discussed in this tutorial) and will be motivated to apply the knowledge gained to get much higher I/O performance from their applications than before.
 
Because this tutorial will include a hands-on session, the audience will be provided access to Stampede, a 10 PFLOPS Dell Linux cluster at TACC, to carry out the exercises. The audience will need personal laptops to log in to Stampede remotely via SSH. Hence an SSH client or terminal access should be available on the laptops to be used during the tutorial.
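For a flavor of what the exercises build toward, here is a minimal collective MPI-IO write in C (an illustrative sketch of ours, not one of the tutorial's skeleton programs): every rank writes its block of integers to a rank-dependent offset in a shared file.

    /* Each MPI rank writes COUNT ints to its own region of a shared file
       using a collective write. Run with: mpiexec -n 4 ./a.out */
    #include <mpi.h>

    #define COUNT 1024

    int main(int argc, char **argv)
    {
        int rank, buf[COUNT];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (int i = 0; i < COUNT; i++)
            buf[i] = rank;

        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        /* Collective write: all ranks participate, each at its own offset. */
        MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(int);
        MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_INT,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }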


Monday July 27, 2015 8:00am - 12:00pm CDT
Landmark 7

8:00am CDT

Tutorial: Scientific Visualization
Visualization is largely understood and used as an excellent communication tool by researchers. This narrow view often keeps scientists from fully using and developing their visualization skill set. This tutorial will provide a "from the ground up" understanding of visualization and its utility in error diagnostics and in the exploration of data for scientific insight. When used effectively, visualization can provide a complementary and effective toolset for data analysis, which is one of the most challenging problems in computational domains. In this tutorial we plan to bridge these gaps by providing end users with fundamental visualization concepts, execution tools, customization and usage examples. Finally, a short introduction to SeedMe.org will show users how to share their visualization results ubiquitously.
The tutorial will comprise four closely related sessions:
1. Visualization fundamentals: lecture – assay of standard techniques and their utility (45 mins)
2. Hands-on visualization with the VisIt software on your computer/laptop (80 mins)
3. Remote visualization with VisIt on the Comet/Gordon cluster at SDSC; we will provide training accounts (30 mins)
4. VisIt now ships with the SeedMe Python module for sharing visualizations rapidly; a swift introduction to using SeedMe.org will be provided (20 mins)
Pre-requisites: None
Level: Introductory/Beginner
Requirements:
1. Computer, mouse with scroll wheel
2. VisIt software installed on your computer (version will be provided later)
3. Download sample data: https://wci.llnl.gov/content/assets/docs/simulation/computer-codes/visit/visit_data_files.tar.gz
4. Account on SeedMe.org
Slides and recording of a recent tutorial are available at http://www.sdsc.edu/Events/gordonviz2015/index.html


Monday July 27, 2015 8:00am - 12:00pm CDT
Landmark 2

8:00am CDT

Tutorial: SciGaP Tutorial: Developing Science Gateways using Apache Airavata
Science gateways, or Web portals, are an important mechanism for broadening and simplifying access to computational grids, clouds, and campus resources. Gateways provide science-specific user interfaces to end users who are unfamiliar with command-line interfaces or who need more capabilities than those interfaces provide. In this tutorial, we present SciGaP, which includes software from the CIPRES, UltraScan, Neuroscience, and GridChem gateways combined with the Apache Airavata server-side system for managing jobs and data. Our goal is to show participants how to build and run gateways using both software and collected experience from some of the most heavily used XSEDE science gateways.


Monday July 27, 2015 8:00am - 12:00pm CDT
Landmark 6

8:00am CDT

Tutorial: Secure Coding Practices
This tutorial is relevant to anyone wanting to learn about minimizing security flaws in the software they develop or manage. We share our experiences gained from performing vulnerability assessments of critical middleware. You will learn skills critical for software developers and analysts concerned with security.
 
This tutorial starts by presenting basic concepts related to threats, weaknesses and vulnerabilities. We will also show how to think like an attacker. The rest of the tutorial presents coding practices that lead to vulnerabilities, with examples of how they commonly arise, techniques to prevent them, and exercises to reinforce your skills in avoiding them. Examples come from a wide variety of languages, including Java, C, C++, C#, Perl, Python, and Ruby, and from real code belonging to Cloud and Grid systems we have assessed. This tutorial is an outgrowth of our experience in performing vulnerability assessments of critical middleware and services, including well-known systems such as Google Chrome, Wireshark, and HTCondor.
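As a flavor of the coding practices covered, consider a classic C buffer overflow and its bounded replacement (our own illustrative example, not code from the systems we have assessed):

    #include <stdio.h>

    void greet(const char *attacker_controlled)
    {
        char buf[16];

        /* VULNERABLE: strcpy writes past buf if the input exceeds 15
           characters plus the terminating NUL, corrupting the stack:
           strcpy(buf, attacker_controlled);                          */

        /* SAFER: snprintf never writes more than sizeof buf bytes and
           always NUL-terminates. Truncation still needs handling. */
        snprintf(buf, sizeof buf, "%s", attacker_controlled);
        printf("hello, %s\n", buf);
    }

    int main(void)
    {
        greet("a-string-much-longer-than-sixteen-bytes");
        return 0;
    }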


Monday July 27, 2015 8:00am - 12:00pm CDT
Landmark 1

8:00am CDT

Tutorial: Spark: Big Data processing framework
This tutorial, suitable for attendees with basic knowledge of data analysis, will provide a comprehensive overview of the big data analysis framework Spark. Participants will gain in-depth knowledge of this large-scale data analysis tool, along with hands-on experience with real-world use cases on XSEDE resources.
 
Data is growing in many sectors, such as scientific domains, business and industry. Many data processing frameworks are available for Big Data analysis. Among these, Hadoop is the most popular tool, although Hadoop cannot handle real-time, interactive data processing. Spark is a next-generation data analytics framework that overcomes these drawbacks: it provides in-memory computing that can be orders of magnitude faster than Hadoop. This tutorial will introduce Spark and Spark libraries such as Spark SQL, Spark Streaming, MLlib (machine learning), and GraphX (graph processing). It will also cover deployment of Spark on XSEDE resources, case studies, and hands-on exercises. At the end of the tutorial, attendees will have practical knowledge of Spark and its libraries, which they can apply to their own domains.


Monday July 27, 2015 8:00am - 12:00pm CDT
Majestic D

8:00am CDT

Tutorial: XSEDE New User Tutorial
This tutorial will provide training and hands-on activities to help new users learn and become comfortable with the basic steps necessary to first obtain, and then successfully employ, an XSEDE allocation to accomplish their research or educational goals. The tutorial will comprise four sections. The first part of the tutorial will explain the XSEDE allocations process and how to write and submit successful allocation proposals. The instructor will describe the contents of an outstanding proposal and the process for generating each part. Topics covered will include the scientific justification, the justification of the request for resources, techniques for producing meaningful performance and scaling benchmarks, and navigating the XRAS system through the XSEDE Portal for electronic submission of proposals. The second section, "Information Security Training for XSEDE Researchers," will review basic information security principles for XSEDE users, including: how to protect yourself from online threats and risks, how to secure your desktop/laptop, safe practices for social networking, email and instant messaging, how to choose a secure password, and what to do if your account or machine has been compromised. The third part of the tutorial will cover the New User Training material that has been delivered remotely each quarter, but will delve deeper into these topics. New topics will be covered, including how to troubleshoot a job that has not run, and how to improve job turnaround by understanding differences in batch job schedulers on different platforms. We will demonstrate how to perform the various tasks with live, hands-on activities and personalized help. The practice section of the tutorial will consist of hands-on activities including submitting a job, figuring out why it has not run, and transferring files between supercomputers. In the event of network issues we will have demos available as a backup. We anticipate significant interest from Campus Champions, and therefore we will explain how attendees can assist others, as well as briefly describe projects currently being carried out in non-traditional HPC disciplines.


Monday July 27, 2015 8:00am - 12:00pm CDT
Landmark 3

8:00am CDT

Tutorial: Heterogeneous Computing on Stampede
Today, many new HPC systems such as Stampede are heterogeneous systems containing traditional processors, co-processors, and/or accelerators. These hardware designs for performance and efficiency greatly complicate the development of large scale applications that now require the consideration of at least three different levels of parallelism (MPI, threading, SIMD) with quite different performance considerations.
 
This tutorial will target application developers who are developing applications for multi- and many-core systems, with a focus on the Intel Xeon Phi platform, also known as the MIC. In addition to a presentation of the programming models that are available on systems with MIC co-processors, the tutorial will provide users with hands-on guidance in using wide SIMD units through vectorization, large thread counts on each node through OpenMP, and symmetric and offload modes of execution (a small illustrative sketch follows the list below). Accounts on Stampede will be used for the laboratory exercises.
 
At the completion of the tutorial, the audience will:
* Have hands-on experience with the use of Intel's vec-report options.
* Understand how to create arrays aligned for optimal vectorization.
* Be able to create asynchronous offload codes and control data persistence on the Xeon Phi.
* Understand how to properly place threads/tasks on the Xeon Phi when running in either offload or symmetric mode.
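As a small illustrative sketch of the first two points (our example, not the tutorial's lab code), the following C program combines 64-byte aligned allocation, which matches the Xeon Phi's 512-bit vector registers, with OpenMP threading and a SIMD hint; the labs themselves use Stampede's Intel compilers and Stampede-specific flags:

    /* Aligned allocation plus OpenMP threading with a SIMD hint.
       The simd clause needs an OpenMP 4.0 compiler (e.g. icc 15+,
       gcc 4.9+); compile with -qopenmp or -fopenmp. */
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 20)

    int main(void)
    {
        float *x, *y;

        /* 64-byte alignment suits the Xeon Phi's 512-bit vectors. */
        if (posix_memalign((void **)&x, 64, N * sizeof(float)) ||
            posix_memalign((void **)&y, 64, N * sizeof(float)))
            return 1;
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* Threads across cores; each thread's chunk is vectorized. */
        #pragma omp parallel for simd
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);
        free(x);
        free(y);
        return 0;
    }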


Monday July 27, 2015 8:00am - 4:30pm CDT
Majestic H

8:00am CDT

Tutorial: Scaling R Computation for Big Data processing with XSEDE Resources
While R has not been considered a traditional application in HPC, R has become the lingua franca for many areas of data analysis, drawing power from its high level expressiveness and its multitude of domain-specific, community-developed packages. In recent years, many efforts by the R and HPC communities have focused on bridging this gap for scaling R to the power of supercomputers. The interest in using R with supercomputing is indeed on the rise within the XSEDE community as well. The two most recent workshops on R organized by NICS and TACC have drawn hundreds of registrations and online participants. The goal of this tutorial is to provide guidance to participants on improving and scaling up existing scientific analysis workflows with R in order to best utilize the resources available through XSEDE. The tutorial consists of two major components. The morning sessions focus on helping users to develop efficient R code. The presentations include how to profile R code and best practices in writing and compiling R code for efficiency. The afternoon session will focus on approaches to scale R computations on resources that are supported by XSEDE. The presentations include using parallel packages (such as parallel, pbdR) with R, utilizing hardware accelerators (such as Xeon Phi and GPGPU) and bridging R with other big data analysis systems (such as Hadoop and Spark).


Monday July 27, 2015 8:00am - 4:30pm CDT
Majestic G

9:30am CDT

Tutorial Break
Tutorial break for tutorial presenters and attendees only

Monday July 27, 2015 9:30am - 10:00am CDT
Majestic E

12:00pm CDT

Tutorial Lunch
Tutorial lunch for tutorial presenters and attendees only

Monday July 27, 2015 12:00pm - 1:30pm CDT
Majestic E

1:30pm CDT

Tutorial: Sharing transient data and preliminary results from XSEDE resources via SeedMe platform
High performance computing (HPC) processes and workflows often include several steps, for example input preparation, computation monitoring, output validation, analysis and visualization. All of these processes yield small-scale consumable information, e.g. computation progress, statistics and plots, which is of high value to the research team. Sharing and accessing this consumable information is often slow and cumbersome in current HPC environments. This tutorial will introduce attendees to the SeedMe (Stream, Encode, Explore and Disseminate My Experiments) platform and show how it can be leveraged to share consumable information rapidly, with easy-to-use tools, in an ad hoc and/or automated manner.
The tutorial will cover the following topics:
1. SeedMe platform overview (Lecture – 10 mins)
2. Web browser interaction (Hands On – 10 mins): learn organization, navigation and usage via web browser for editing, uploading, commenting, sharing and notifying
3. Automation setup (Hands On – 5 mins): learn to set up the environment for command line and programmatic interaction
4. Command line interaction (Hands On – 25 mins): learn to use the SeedMe command line tools, which can be used on HPC platforms
5. Programmatic interaction (Hands On – 25 mins): learn the methods available in the Python client/module and use them in a sample example
6. Scientific code integration (Lecture – 10 mins): hints for integration with C/C++, Fortran and Java codes
7. Limitations (1 slide)
Skill level: Introductory/Beginner
Pre-requisites: basic familiarity with command line tools, like changing directories and executing commands
Networking: Internet access will be required for this tutorial
Requirements:
1. Computer/laptop (mobile devices are not sufficient for this tutorial)
2. Account on SeedMe.org (attendees can create one at the tutorial)
3. Download SeedMe tools – either standalone or Python client/module (recommended)
4. Download sample data
Abbreviated, example-oriented content from the SeedMe documentation will be presented at the tutorial:
Overview - https://www.seedme.org/documentation/overview
Command line - https://www.seedme.org/cli-guide
Programmatic interaction - https://www.seedme.org/documentation/programmatic-python-usage


Monday July 27, 2015 1:30pm - 3:00pm CDT
Majestic D

1:30pm CDT

Student Tutorial: Supercomputing in Plain English, Part 2
* Lecture: MPI: The Desert Islands Analogy. This session provides an intuitive, nontechnical analogy for understanding distributed parallelism (desert islands), as a precursor for understanding the MPI programming model: distributed execution, communication, message passing, independence, privacy, latency vs. bandwidth; parallel strategies (client-server, task parallelism, data parallelism, pipelining). Assumed background: 1 semester of programming in C or C++, recently; basic Unix/Linux experience, recently.
* Lecture: MPI: The First 6 Routines. This session gives a first introduction to distributed parallelism via MPI. Topics include: the first six MPI routines (MPI_Init, MPI_Finalize, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv); structure of MPI calls; MPI program structure; Single Program/Multiple Data strategy; hello world; running MPI codes; compiling for MPI; rank; indeterminism; MPI data types; tags; communicators.
* Lab: Running an MPI Code. In this hands-on lab session, you'll get an account on one or more supercomputers, and you'll get a chance to run a job. If your Unix/Linux skills have gotten a little rusty, this will be a great refresher.
* Group Coding Session: Parallelizing a Serial Code with MPI. In this directed group coding session, we'll work together as a group to use MPI to parallelize a serial code.
* Other topics may be introduced if time permits.
Content: Older versions of the lecture slides and exercise descriptions (which will be updated) may be found as follows:
* Lecture: MPI: The Desert Islands Analogy http://www.oscer.ou.edu/ncsiworkshop2012intropar_sipe_distribmem_20120730.pdf (Slides #1-14 only)
* Lecture: MPI: The First 6 Routines http://www.oscer.ou.edu/MPIBasics.pdf
* Lab: Running an MPI Code http://www.oscer.ou.edu/ncsiworkshop2012intropar_exercise_mpihelloworld_boomer_20120729.pdf
* Group Coding Session: Parallelizing a Serial Code with MPI http://www.oscer.ou.edu/AreaUnderCurveExample.pdf
Prerequisites: 1 semester of programming in C or C++, recently; basic Unix/Linux experience, recently; "Supercomputing in Plain English" morning session or equivalent. (Attendees with Fortran experience may be able to follow along, but the group coding exercise cannot be provided in both C and Fortran simultaneously.)
Requirements: Laptop (Windows, MacOS or Linux) strongly recommended but not required.
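The six routines listed above are enough for a complete message-passing program. A minimal C hello world in the spirit of the session (our sketch, not the linked slides):

    /* Hello world with the first six MPI routines. Rank 0 collects a
       greeting from every other rank.
       Compile: mpicc hello.c    Run: mpiexec -n 4 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        char msg[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank != 0) {
            snprintf(msg, sizeof msg, "hello from rank %d of %d", rank, size);
            MPI_Send(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        } else {
            printf("rank 0 of %d says hello\n", size);
            for (int src = 1; src < size; src++) {
                MPI_Recv(msg, sizeof msg, MPI_CHAR, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("%s\n", msg);
            }
        }

        MPI_Finalize();
        return 0;
    }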


Monday July 27, 2015 1:30pm - 4:30pm CDT
Majestic F

1:30pm CDT

Tutorial: Accelerating Big Data Applications with Hadoop, Spark, and Memcached on Modern HPC Clusters
The explosive growth of 'Big Data' has caused many industrial firms to adopt HPC technologies to meet the requirements of the huge amounts of data to be processed and stored. According to a 2013 IDC study, 67% of high-performance computing systems were running High-Performance Data Analysis (HPDA) workloads. Apache Hadoop and Spark are increasingly being used on modern high-performance computing clusters to process HPDA workloads. Similarly, Memcached in Web 2.0 environments is becoming important for large-scale query processing.

Recent studies have shown that default Apache Hadoop, Spark, and Memcached cannot efficiently leverage the features of modern high-performance computing clusters, such as Remote Direct Memory Access (RDMA)-enabled high-performance interconnects and high-throughput, large-capacity parallel storage systems. These middleware packages are traditionally written with sockets and do not deliver the best performance on HPC clusters with modern high-performance networks. In this tutorial, we will provide an in-depth overview of the architecture of Hadoop components (HDFS, MapReduce, RPC, HBase, etc.), Spark and Memcached. We will examine the challenges in re-designing the networking and I/O components of these middleware with modern interconnects and protocols (such as InfiniBand, iWARP, RoCE, and RSocket) with RDMA and storage architecture. Using the publicly available software packages in the High-Performance Big Data (HiBD, http://hibd.cse.ohio-state.edu) project, we will provide case studies of the new designs for several Hadoop/Spark/Memcached components and their associated benefits. Through these case studies, we will also examine the interplay between high-performance interconnects, storage systems (HDD and SSD), and multi-core platforms to achieve the best solutions for these components and Big Data applications on modern HPC clusters.


Monday July 27, 2015 1:30pm - 4:30pm CDT
Majestic C

1:30pm CDT

Tutorial: Introduction to Python
This tutorial covers the Python programming language including all the information needed to participate in the XSEDE15 Modeling Day event on Tuesday. Topics covered will be variables, input/output, control structures, math libraries, and plotting libraries. To best participate, attendees should come with the Anaconda Python package downloaded and installed on their computer. You can get Anaconda Python at https://store.continuum.io/cshop/anaconda/


Monday July 27, 2015 1:30pm - 4:30pm CDT
Landmark 2

1:30pm CDT

Tutorial: Introduction to Scientific Workflow Technologies on XSEDE
This is a joint tutorial between the ECSS workflows team and the teams behind the following workflow technologies: Swift, Makeflow/Work Queue and Pegasus. The goal is that attendees will leave the tutorial with a good understanding of the workflow-related services and tools available, that they will understand how to use them on XSEDE through hands-on exercises, and that they will be able to apply this knowledge to their own workloads when using XSEDE and other computing resources.
 
One major obstacle when running workflows on XSEDE is where to run the workflow engine. Larger projects and groups might have their own submit hosts, but it is common for users to struggle to find a home for their workflow runs. For this reason, one effort the ECSS workflows team has set up, based on feedback from the XSEDE14 workflow birds-of-a-feather session, is an IU Quarry hosted submit host. The host is a clone of the login.xsede.org single sign-on host; thus, just like login.xsede.org, any XSEDE user with an active allocation automatically has access. With the host, we are also assembling content for a website highlighting tested workflow systems, with XSEDE-specific examples that users can use to try out the different tools. These examples will form the basis for the hands-on exercises in this tutorial.
 
Swift is a simple language for writing parallel scripts that run many copies of ordinary programs concurrently, as soon as their inputs are available, reducing the need for complex parallel programming. The same script runs on multi-core computers, clusters, grids, clouds, and supercomputers, and is thus a useful tool for moving your computations from a laptop or workstation to any XSEDE resource. Swift can run a million programs, thousands at a time, launching hundreds per second. This hands-on tutorial will give participants a taste of running simple parallel scripts on XSEDE systems and provide pointers for applying it to your own scientific work.
 
Makeflow is a workflow engine for executing large, complex workflows, scaling up to thousands of tasks and hundreds of gigabytes of data. In this section of the tutorial, users will learn the basics of writing a Makeflow, which is based on the traditional Make construct. In the hands-on example, users will learn to write Makeflow rules, run a Makeflow locally, and run the tasks on XSEDE resources. Users will be introduced to Work Queue, a scalable master/worker framework, and will create workers on XSEDE resources and connect them to the Makeflow. Users will also learn to use Work Queue to monitor workflows, along with the basics of debugging Makeflows.
 
The Pegasus Workflow Management System sits on top of HTCondor DAGMan. In this section of the tutorial, users will learn how to create abstract workflows, and how to plan, execute, and monitor the resulting executable workflow. The first workflow will be run locally on the submit host, while the two other hands-on examples will run workflows on XSEDE resources. One workflow will run jobs across resources, highlighting the workflow system's data management capability in such setups. Another will use the pegasus-mpi-cluster tool to execute a high-throughput workload in an efficient and well-behaved manner on one of the XSEDE high performance computing resources.


Monday July 27, 2015 1:30pm - 4:30pm CDT
Majestic B

1:30pm CDT

Tutorial: Introduction to the Eclipse Parallel Tools Platform
For many HPC developers, developing, analyzing and tuning parallel scientific applications, on a variety of target platforms, involves a hodgepodge of disparate command line tools. Based on the successful open-source Eclipse integrated development environment, the Eclipse Parallel Tools Platform (PTP) combines tools for coding, debugging, job scheduling, monitoring, error detection, tuning, revision control and more into a single tool with a streamlined graphical user interface. PTP helps manage the complexity of HPC code development, optimization and monitoring on diverse platforms. This tutorial provides a hands-on introduction to the Eclipse Parallel Tools Platform. Early sessions introduce code development in Eclipse: editing, building, launching and monitoring parallel applications in C and Fortran, support for efficient development of code on remote machines, and developing and analyzing code with a variety of languages and libraries. Access to an XSEDE resource for hands-on exercises will be provided. NOTE: Bring a laptop and pre-install Eclipse for Parallel Application Developers (after July 1). See http://wiki.eclipse.org/PTP/tutorials/XSEDE for installation instructions.


Monday July 27, 2015 1:30pm - 4:30pm CDT
Landmark 4

1:30pm CDT

Tutorial: Introduction to the Latest Features in MPI-3
In an effort to position MPI strongly for multi-core and highly scalable systems, and to address the compelling needs of the end-users, the MPI forum released the major extension to MPI in the form of Version 3.0 in September 2012. This latest version of MPI, referred to as MPI-3, includes several new features like nonblocking collectives, new one-sided communication operations, and Fortran 2008 bindings. Unlike MPI-2.2, this standard is considered a major update to the MPI standard. This half-day tutorial will include discussions and hands-on sessions on the following topics: a general overview of the main features added to MPI and a detailed overview of one-sided communication or RMA, nonblocking collectives, and version detection. This tutorial is meant for intermediate to advanced MPI programmers who are interested in learning about some of the latest additions to the widely used MPI standard in order to increase the performance of their applications and to reduce energy consumption.
 
The tutorial will be driven through examples; hence, the audience should have C/Fortran programming skills to take advantage of the material presented and to participate in the hands-on session. In the hands-on session, the audience will be given four exercises in a time period of one hour. They will be provided with skeleton programs written in C/Fortran and instructions to modify the programs to achieve the objectives of the exercises. The skeleton programs will include comments/placeholders to guide the audience in modifying the code. The hands-on session will help the audience test the knowledge gained during the tutorial. By the end of the tutorial, the audience will have learned how to use one-sided communication and nonblocking collectives, and will be able to conditionally compile or run with MPI-3-specific features while maintaining legacy support for MPI-2 and earlier versions of the API.
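The version-detection exercise might look something like the following C sketch (ours, not the tutorial's skeleton code): the MPI_VERSION macro guards an MPI-3 nonblocking broadcast while keeping a blocking MPI-2 fallback.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, data[4] = {0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            data[0] = 42;

    #if MPI_VERSION >= 3
        /* MPI-3: start the broadcast, overlap independent work, wait. */
        MPI_Request req;
        MPI_Ibcast(data, 4, MPI_INT, 0, MPI_COMM_WORLD, &req);
        /* ... independent computation could go here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    #else
        /* Fallback for MPI-2 and earlier: blocking broadcast. */
        MPI_Bcast(data, 4, MPI_INT, 0, MPI_COMM_WORLD);
    #endif

        printf("rank %d: data[0] = %d (MPI %d.%d)\n",
               rank, data[0], MPI_VERSION, MPI_SUBVERSION);
        MPI_Finalize();
        return 0;
    }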
 
Because this tutorial will include a hands-on session, the audience will be provided access to Stampede, a 10 PFLOPS Dell Linux cluster at TACC, to carry out the exercises (a local installation of mpich-3.x or openmpi-1.7.5 or later on a Linux, Mac, or Windows/Cygwin system will also suffice). The audience will need their own laptops to log in to Stampede remotely via SSH. Hence an SSH client or terminal access should be available on the laptops to be used during the tutorial.


Monday July 27, 2015 1:30pm - 4:30pm CDT
Landmark 6

1:30pm CDT

Tutorial: Introduction to Using Comet
Comet is a new XSEDE HPC resource hosted and operated at SDSC. It features next-generation AVX2-capable Intel processors, Mellanox FDR InfiniBand interconnects, NVIDIA GPUs on part of the system, and Aeon high performance storage. The standard compute nodes consist of Intel Xeon E5-2680v3 (formerly codenamed Haswell) processors, 128 GB DDR4 DRAM (64 GB per socket), and 320 GB of SSD based local scratch. The GPU nodes contain four NVIDIA GPUs each. This tutorial introduces the capabilities of Comet, and has hands on material to cover the different modes of usage anticipated for the machine.
 
We begin the tutorial with detailed information on the Comet system architecture, including compute and GPU nodes, InfiniBand topology, performance storage, and virtualization. Performance comparisons between Comet and Gordon are provided for selected applications. We then follow with information on accessing Comet, transferring data to/from Comet, and the software environment.
 
The hands-on section of the tutorial is divided into three parts: 1) using the compute nodes, including examples of MPI, OpenMP, hybrid (e.g. MPI+OpenMP), local scratch, and Hadoop jobs; 2) using the GPU nodes, with examples of MVAPICH2-GDR (which uses GPUDirect), CUDA, and OpenACC programming; and 3) special features of Comet, including high performance virtualization and gateway hosting options.
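The hybrid mode in part 1 pairs MPI ranks across nodes with OpenMP threads within each node. A minimal illustrative C sketch of the pattern (ours, not Comet's training material):

    /* Hybrid MPI+OpenMP: MPI ranks across nodes, OpenMP threads within.
       Compile (e.g.): mpicc -fopenmp hybrid.c */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        /* FUNNELED: only the main thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            printf("rank %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }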
 
SDSC Staff will be available to meet with individual users, to further discuss usage of Comet, at the conclusion of the tutorial.

Speakers

Robert Sinkovits

Director Scientific Computing Applications, San Diego Supercomputer Center
I get excited about all things science and computing. Started off as a physicist, but have since collaborated with immunologists, ecologists, chemists, structural biologists, economists, astronomers, engineers, social scientists, mathematicians and others. Passionate about education...


Monday July 27, 2015 1:30pm - 4:30pm CDT
Majestic A

1:30pm CDT

Tutorial: Managing Data Throughout the Research Lifecycle Using Globus Software-as-a-Service
Over the past four years, Globus has become a preferred service for moving and sharing research data on XSEDE and campus computing resources. With the recent release of data publication and discovery capabilities, Globus now provides useful tools for managing data at every stage of the research lifecycle. While usage across the XSEDE ecosystem continues to increase, there are many institutions and investigators who are either not aware of the capabilities and benefits Globus can provide, or have limited-scope deployments that they would like to expand.
 
In this session, participants will learn about the features of the Globus service, and how to use it for delivering robust research data management services that span campus systems, national cyberinfrastructure, and public cloud resources. Globus is installed at most XSEDE resource providers, and we will draw on experiences from research computing centers (e.g. Michigan, Purdue, Colorado) and HPC facilities (e.g. ALCF, SDSC) to highlight the challenges such facilities face in delivering scalable research data management services. Attendees will be introduced to Globus and will have multiple opportunities for hands-on interaction with the service.
 
The tutorial will help participants answer these questions: How can Globus help researchers overcome the challenges they face in managing increasingly "big" data? What services can a campus computing facility offer to researchers for managing large datasets more efficiently? How can such services be integrated into existing campus computing infrastructure? How should such services be delivered to minimize the impact on IT infrastructure? What issues are typically encountered (e.g. security) and how should they be addressed?


Monday July 27, 2015 1:30pm - 4:30pm CDT
Landmark 1

1:30pm CDT

Tutorial: Software-Defined Visualization: Data Analysis for Current and Future Cyberinfrastructure
The design emphasis for supercomputing systems has moved from raw performance to performance-per-watt, and as a result, supercomputing architectures are converging on processors with wide vector units and many processing cores per chip. Such processors are capable of performant image rendering purely in software. This improved capability is fortuitous, since the prevailing homogeneous system designs lack dedicated, hardware-accelerated rendering subsystems for use in data visualization. Reliance on this "software-defined" rendering capability will grow in importance since, due to growing data sizes, visualizations must be performed on the same machine where the data is produced. Further, as data sizes outgrow disk I/O capacity, visualization will be increasingly incorporated into the simulation code itself (in situ visualization).
 
Fortunately, a number of recently-developed software packages capture the improved rendering performance of general-purpose processors and expose that performance through common visualization toolkits. Among these packages are those that render via rasterization, the algorithm used in hardware accelerators, as well as those that render via ray tracing, an algorithm that physically simulates the travel of light. To produce insightful visualizations most efficiently, the visualization scientist should understand the capabilities, performance characteristics and precision limits of each rendering method.
 
In this half-day introductory tutorial, we present a primer on rasterization and ray tracing and an overview of (mostly) open-source software packages available to the open-science community, as well as hands-on experience with the fundamental techniques. We begin with a brief background of terms and concepts to ensure that all participants have a working knowledge of the material covered in the remainder of the tutorial. We then motivate the concepts through three application lightning talks that demonstrate the use of rasterization and ray tracing in actual domain applications. Finally, participants will apply the concepts in guided hands-on visualization labs using the TACC XSEDE resources Stampede and Maverick.


Monday July 27, 2015 1:30pm - 4:30pm CDT
Landmark 5

1:30pm CDT

Tutorial: XCBC using LittleFe and the Limulus HPC200
For this tutorial, we wish to demonstrate building practical XSEDE Compatible Basic Clusters (XCBC) while showing that an XCBC need not be an expensive resource to be useful. One secondary goal is showing the basic process of installing Rocks on a "bare metal" system from start to finish, ultimately running a basic multi-node job that will demonstrate the abilities of such a system. The next secondary goal is to show how to use Yum and RPMs developed by XSEDE to create an XCBC. We also want to demonstrate the value of using a small local cluster to pilot and prototype research in order to better prepare for requesting allocations on national XSEDE resources. An additional goal is to inspire those with limited system administration resources and/or limited budgets to find ways to develop clusters that meet their needs for experimentation or for smaller research jobs that might not otherwise get time on XSEDE systems. Lastly, we will cover the basics of management for small clusters: getting started and managing users effectively.


Monday July 27, 2015 1:30pm - 4:30pm CDT
Landmark 3

1:30pm CDT

Tutorial: XSEDE New User Tutorial: Using Science Gateways
The purpose of this tutorial is to supplement the standard XSEDE new user tutorial with overviews of how to use science gateways so that the new users can start using XSEDE for scientific research right away, at the conference, and continue at their home institution, without getting bogged down in the allocation process at the beginning. The target audience members are scientists in particular domains (chemistry, neuroscience, atmospheric science) who are new to XSEDE and who optionally are familiar with common software packages in their field, but who do not have deep experience with using supercomputers and clusters. The tutorial will provide a brief overview of XSEDE and the gateway program, including a list of other available gateways not covered by the tutorial. The bulk of the tutorial will be a sequence of hands-on activities for domain specific gateways. The tutorial organizers will work with XSEDE conference organizers and the outreach team to recruit new user attendees from the selected domains.
 
Attendees will not need to be researchers in a specific science domain to participate in a specific gateway exercise. The organizers will provide all required input files and data for the exercises. Attendees will be encouraged to work with gateway providers on their specific research problems. Each hands on session will demonstrate how to create an account on the science gateway; how to create, submit, and monitor a submission of a specific application code; and how to retrieve final results. Each session will also include a feedback opportunity to help new users understand the additional capabilities of each gateway and optionally try out attendee-provided input data.


Monday July 27, 2015 1:30pm - 4:30pm CDT
Landmark 7

2:45pm CDT

Tutorial Break
Tutorial break for tutorial presenters and attendees only

Monday July 27, 2015 2:45pm - 3:15pm CDT
Majestic E

3:00pm CDT

Tutorial: Efficient Data Analysis with the IPython Notebook
There are many recent additions to Python that make it an excellent programming language for data analysis. This tutorial has two goals. First, we introduce several ways to run the IPython notebook on remote XSEDE systems for data analysis, with hands-on exercises for manipulating data using pandas and matplotlib. Participants will be able to execute examples using the IPython notebook and learn a powerful new way to approach scientific workflows for data analysis on HPC systems. Second, we will provide examples of how to use pyspark and the IPython notebook on Gordon for the analysis of large data sets.


Monday July 27, 2015 3:00pm - 4:30pm CDT
Majestic D

5:00pm CDT

BOF: Clouds are forming: XSEDE cloud capabilities and futures discussion BOF
Cloud computing has moved beyond computer science projects and into production research computing environments. Cloud is now a major technology in enterprise use in industry and academia, as well as in the national cyberinfrastructure (CI). Multiple XSEDE service providers (SPs) have awards that include cloud or cloud-like technologies at their sites. These new deployments are unlike any current XSEDE offerings and thus may require new and coordinated XSEDE services and capabilities in the future. In order to better serve the community, we want to use this BOF to spur discussion and identify common needs related to cloud computing for XSEDE in operations, security, and researcher usability. One such example is a potential XSEDE cloud image repository and how it may benefit all XSEDE participants and stakeholders. Other topics could include security implications, image vetting, federation, scheduling, software-defined networking (SDN), and participants' cloud experiences.
 
This BOF aims to provide a forum for recent NSF awardees with cloud capabilities to describe their environments and to discuss and identify topics for future deeper dives that would serve the XSEDE community regarding future services.


Monday July 27, 2015 5:00pm - 6:00pm CDT
Majestic A

5:00pm CDT

BOF: Configuration Management for HPC Clusters
Are you using configuration management for your cluster? You should be. A well orchestrated HPC install is a thing of beauty. Come join us to talk shop about automating deployments.


Monday July 27, 2015 5:00pm - 6:00pm CDT
Majestic B

5:00pm CDT

BOF: Gateways for Humanities, Arts, and Social Science
This BOF is for anyone interested in high performance computing for Humanities, Arts, or Social Science. Science gateways offer a low barrier to entry to high performance computing. XSEDE is working with humanities scholars to create gateways that allow others to easily use a variety of tools with their own data. There are currently three gateways in development for humanities, arts, and social science, and this BOF will provide an update and a chance for discussion, input, questions, etc. We will have project leaders and technical developers in the room to participate in the discussion, and we'd love to hear your thoughts, too. The gateways currently under development include:
 
• Text Analysis
• Video / Image Analysis
• Network Analysis
 
Topics we will discuss include (but are not limited to):
 
• Current Status of the Gateways
• Use Cases for Gateways
• Potential Data Sources
• Anything the group would like to discuss
 
Regardless of your level of experience, we'd like to invite you to join in the discussion! If you have any questions or suggestions, please send them to Alan Craig, XSEDE HASS Specialist: acraig@ncsa.uiuc.edu


Monday July 27, 2015 5:00pm - 6:00pm CDT
Majestic C

5:00pm CDT

BOF: Open Session with the User Advisory Committee
Come to express your thoughts, concerns, wishes or successes with your representatives to XSEDE management. Emre Brookes, Chair of the User Advisory Committee (UAC), along with other members of the UAC, will be available for an open conversation with those making use of XSEDE resources and services. This session will be held without XSEDE project representatives to allow for an open discussion of any issues or concerns amongst the community. The UAC consists of members of the national community who represent the needs and requirements of the research and education community; in particular, it addresses the concerns of the day-to-day users of the XSEDE environment and provides guidance on how the activities and plans of XSEDE can better serve those needs. The committee represents the "researcher's voice" to XSEDE management, presenting recommendations regarding emerging needs and services and acting as a sounding board for plans and suggested developments in the XSEDE environment and services.


Monday July 27, 2015 5:00pm - 6:00pm CDT
Majestic F

6:00pm CDT

Welcome Networking Meeting
Monday July 27, 2015 6:00pm - 8:00pm CDT
Crystal Ballroom
 
Tuesday, July 28
 

7:00am CDT

Breakfast sponsored by Cray
Tuesday July 28, 2015 7:00am - 8:00am CDT
Landmark 1-7

7:00am CDT

Registration

Tuesday July 28, 2015 7:00am - 4:30pm CDT
Landmark Registration Office

8:00am CDT

Plenary: Cyberinfrastructure: An NSF Update and Reflections on Architecture, Reference Models and Community

Cyberinfrastructure is critical to accelerating discovery and innovation across all disciplines.  In order to support these advances, the National Science Foundation (NSF) supports a dynamic cyberinfrastructure ecosystem composed of multiple resources including data, software, networks, high-end computing, and people.  I will discuss NSF’s strategy to ensure that researchers across the U.S. have access to a diversity of these resources to continue our nation’s ability to be the discovery and innovation engine of the world.  I will also provide an update on cyberinfrastructure activities within NSF, and reflect on the importance of layered CI architectures and reference models for accelerating the pace of scientific discovery.



Tuesday July 28, 2015 8:00am - 8:45am CDT
Majestic D&E

8:00am CDT

Student Modeling Day
Work with a team to build a model of a scientific phenomenon to understand its behavior and simulate a range of conditions. Team members will be challenged to understand the underlying model and the simplifications it makes concerning the real system, to write and test the computer codes that are needed to solve the problem, and to add additional sophistication to the models to relax some of the assumptions, improve the user interface to the code, and validate the code against real data or more sophisticated models.
 The coding will be undertaken using Python, and XSEDE15 has a half-day Python tutorial designed to prepare you for Modeling Day. If you have a team, you can select your teammates when you register. Individuals without a team will be assigned to a team with a mix of backgrounds in science and computer science.
 To sign up for the challenge go to:
 https://www.surveymonkey.com/r/XSEDE15ModelingDay

Tuesday July 28, 2015 8:00am - 5:00pm CDT
Statler Ballroom

8:45am CDT

Plenary: Extreme Data Management Analysis and Visualization: Exploiting Large Data for Science Discovery
Effective use of data management techniques for massive scientific data is a crucial ingredient for the success of any supercomputing center and cyberinfrastructure for data-intensive scientific investigation. Developing such techniques involves a number of major challenges, such as the real-time management of large models or the quantitative analysis of scientific features of unprecedented complexity. The Center for Extreme Data Management Analysis and Visualization (CEDMAV) addresses these challenges with interdisciplinary research in diverse topics including the mathematical foundations of data representations, the design of robust, efficient algorithms, and the integration with relevant applications in physics, biology, and medicine.
 
In this talk, I will discuss one approach developed for dealing with massive amounts of information: a framework for processing large-scale scientific data with high performance selective queries on multiple terabytes of raw data. The combination of this data model with progressive streaming techniques allows interactive processing rates to be achieved on a variety of computing devices, ranging from handheld devices like an iPhone, to simple workstations, to the I/O of parallel supercomputers. With this framework we demonstrated, for example, how one can enable the real-time streaming of massive combustion simulations from DOE platforms at ORNL, LBNL, and ANL.
 
I will also present the application of a discrete topological framework for the representation and analysis of the same large scale scientific data. Due to the combinatorial nature of this framework, we can implement the core constructs of Morse theory without the approximations and instabilities of classical numerical techniques. The inherent robustness of the combinatorial algorithms allows us to address the high complexity of the feature extraction problem for high resolution scientific data and to achieve its deployment in situ.
 
During the talk, I will provide a live demonstration of the effectiveness of some software tools developed in CEDMAV and discuss deployment strategies in an increasingly heterogeneous computing environment.

Speakers

Tuesday July 28, 2015 8:45am - 9:30am CDT
Majestic D&E

9:30am CDT

Break
Tuesday July 28, 2015 9:30am - 10:00am CDT
Landmark 1-7

10:00am CDT

Cyberinfrastructure resources enabling creation of the loblolly pine reference transcriptome
Today’s genomics technologies generate more sequence data than ever before, at substantially lower cost, serving researchers across biological disciplines in transformative ways. Building transcriptome assemblies from RNA sequencing reads is one application of next-generation sequencing (NGS) that has held a central role in biological discovery in both model and non-model organisms, with and without whole genome sequence references. A major limitation in effective building of transcriptome references is no longer the sequencing data generation itself, but the computing infrastructure and expertise needed to assemble, analyze and manage the data. Here we describe a currently available resource dedicated to achieving such goals, and its use for extensive RNA assembly of up to 1.3 billion reads representing the massive transcriptome of loblolly pine, using four major assembly software installations. The Mason cluster, an XSEDE second tier resource at Indiana University, provides the necessary fast CPU cycles, large memory, and high I/O throughput for conducting large-scale genomics research. The National Center for Genome Analysis Support, or NCGAS, provides technical support in using HPC systems, bioinformatic support for determining the appropriate method to analyze a given dataset, and practical assistance in running computations. We demonstrate that sufficient supercomputing resources and good workflow design are essential to large eukaryotic genomics and transcriptomics projects such as the complex transcriptome of loblolly pine, whose gene expression data inform annotation and functional interpretation of the largest genome sequence reference to date.


Tuesday July 28, 2015 10:00am - 10:30am CDT
Majestic C

10:00am CDT

Incorporating Interactive Compute Environments into Web-Based Training Materials using the Cornell Job Runner Service
Online training materials, such as the Cornell Virtual Workshop℠, have many advantages, the foremost being that they are always available as a 24x7 option for learners who want to study a topic on demand and at their own pace. It can be challenging, however, to create online materials that are engaging and provide a realistic learning environment. Traditionally, training materials and compute environments have been separate entities. Even in the HPC environment, students learn from online materials in one window, then log into a new machine or session to try out new skills or concepts. Accessing this second environment can impose obstacles such as gaining access to the appropriate computer and learning to navigate a computer-specific login environment and file system. In an effort to circumvent these obstacles, the Cornell University Center for Advanced Computing (CAC) developed the Cornell Job Runner Service℠ (CJRS), along with a general-purpose toolkit for using the CJRS to embed a computing environment directly into web pages, backed by real or virtual compute resources. This implementation provides the learner with immediate access to a compute environment that looks and feels like a typical HPC login node or batch job, allowing on-demand learning experiences to be interspersed with general training content. With CJRS, students can try out commands and run jobs without obtaining an account or leaving the learning environment to sign in to a remote machine. This paper explores the use of the CJRS toolkit to provide three different interactive modes for learners: a Linux console configured as a general login node, a form element that launches a pre-defined SLURM job, and a guided session that walks the user through pre-planned steps of compiling, fixing, and running MPI code.


Tuesday July 28, 2015 10:00am - 10:30am CDT
Majestic A

10:00am CDT

Publishing and Consuming GLUE v2.0 Resource Information in XSEDE
XSEDE users, science gateways, and services need a variety of accurate information about XSEDE resources so that they can use those resources effectively. They need information to decide which resources to use, to track their usage of resources, and to provide a variety of services to their users. To provide this information, XSEDE is deploying new software and services to gather and publish static and dynamic resource information. This paper describes the software XSEDE uses to gather resource information, the GLUE v2.0 schema used to format that information, and the messaging system used to quickly distribute it, and provides several examples of using this resource information.


Tuesday July 28, 2015 10:00am - 10:30am CDT
Majestic F

10:00am CDT

Intel Tutorial Part 1: Intel-Colfax MIC Parallel Programming Training CDT 101
Colfax Developer Training (CDT) is an in-depth intensive course on efficient parallel programming of Intel Xeon family processors and Intel Xeon Phi coprocessors. The 1-day seminar (CDT 101) features presentations on the available programming models and best optimization practices for the Intel many-core platform, and on the usage of the Intel software development and diagnostic tools. CDT 101 is a prerequisite for the hands-on lab course, CDT 102.
 
Intel Xeon Phi coprocessors, featuring the Intel Many Integrated Core (MIC) architecture, are novel many-core computing accelerators for highly parallel applications, capable of delivering greater performance per system and per watt than general-purpose CPUs. Unlike GPGPUs, they support traditional HPC programming frameworks, including OpenMP and MPI, and require the same optimization methods as multi-core CPUs.
 
Lecture Session
• MIC architecture: purpose, organization, prerequisites for good performance, future technology.
• Programming models: native, offload, heterogeneous clustering (a minimal offload sketch follows this list).
• Parallel frameworks: automatic vectorization, OpenMP, MPI.
• Optimization Methods: general, scalar math, vectorization, multithreading, memory access, communication and special topics.
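 
As a concrete illustration of the offload model listed above, here is a minimal C sketch in the style of the Intel Language Extensions for Offload (LEO) pragmas of this era. The array size, loop body, and names are invented for the example and are not taken from the course materials; with no coprocessor present (or a compiler that ignores the pragma), the loop simply runs on the host.
 
  #include <stdio.h>

  #define N 1000000
  static float a[N], b[N];

  int main(void)
  {
      for (int i = 0; i < N; i++)
          a[i] = (float)i;

      /* Ship a[] to the coprocessor, run the loop there with OpenMP
         threads, and bring b[] back. */
      #pragma offload target(mic) in(a:length(N)) out(b:length(N))
      {
          #pragma omp parallel for
          for (int i = 0; i < N; i++)
              b[i] = 2.0f * a[i] + 1.0f;
      }

      printf("b[42] = %f\n", b[42]);
      return 0;
  }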

Speakers

Tuesday July 28, 2015 10:00am - 12:00pm CDT
Majestic G

10:30am CDT

NCBI-BLAST programs optimization on XSEDE resources for sustainable aquaculture
The development of genomic resources for non-model organisms is now becoming commonplace as the cost of sequencing continues to decrease. The Genome Informatics Facility, in collaboration with the Southwest Fisheries Science Center (SWFSC), NOAA, is creating these resources for sustainable aquaculture in Seriola lalandi. Gene prediction and annotation are common steps in the pipeline used to generate genomic resources, and they are computationally intense and time consuming. In our steps to create genomic resources for Seriola lalandi, we found BLAST to be one of our most rate-limiting steps. We therefore took advantage of our XSEDE Extended Collaborative Support Services (ECSS) to reduce the time required to process our transcriptome data by a factor of three. In this paper, we describe an optimized method for the BLAST tool on the Stampede cluster, which works with any existing dataset or database, without any modification. At modest core counts, our results are similar to the MPI-enabled BLAST algorithm (mpiBLAST), but also allow the much-needed and improved flexibility of output formats that the latest versions of BLAST provide. Reducing this time-consuming bottleneck in BLAST will be broadly applicable to the annotation of large sequencing datasets for any organism.


Tuesday July 28, 2015 10:30am - 11:00am CDT
Majestic C

10:30am CDT

Extending Access to HPC Skills Through a Blended Online Course
Extending expertise in parallel computing is critical for all those using high performance computing to gain insights into science and engineering problems. Many campuses do not offer such a course because of course load limits, a lack of faculty expertise, and/or lack of access to appropriate computing resources. MOOCs for this type of course are difficult to scale, both because of constraints on computing resources and because of the need for individual attention with programming problems. Using a blended online course with collaborating faculty who offer academic credit for their students, we have been able to facilitate course participation at many institutions that might not otherwise have covered the topic. This has had a significant benefit for both the faculty and students. Our paper summarizes the nature of these impacts and offers some insights on best practices for extending technical courses to multiple institutions.


Tuesday July 28, 2015 10:30am - 11:00am CDT
Majestic A

10:30am CDT

Using Data Science to Understand Tape-Based Archive Workloads
Data storage needs continue to grow in most fields, and the cost per byte for tape remains lower than the cost for disk, making tape storage a good candidate for cost-effective long-term storage. However, the workloads suitable for tape archives differ from those for disk file systems, and archives must handle internally generated workloads that can be more demanding than those generated by end users (e.g., migration of data from an old tape technology to a new one). To better understand the variegated workloads, we have followed the first steps in the data science methodology. For anyone considering the use or deployment of a tape-based data archive or for anyone interested in details of data archives in the context of data science, this paper describes key aspects of data archive workloads.


Tuesday July 28, 2015 10:30am - 11:00am CDT
Majestic F

11:00am CDT

FlowGate: Towards Extensible and Scalable Web-Based Flow Cytometry Data Analysis
Recent advances in cytometry instrumentation are enabling the generation of "big data" at the single cell level for the identification of cell-based biomarkers, which will fundamentally change the current paradigm of diagnosis and personalized treatment of immune system disorders, cancers, and blood diseases. However, traditional flow cytometry (FCM) data analysis based on manual gating cannot effectively scale to address this new level of data generation. Computational data analysis methods have recently been developed to cope with the increasing data volume and dimensionality generated from FCM experiments. Making these computational methods easily accessible to clinicians and experimentalists is one of the biggest challenges that algorithm developers and bioinformaticians need to address. This paper describes FlowGate, a novel prototype cyberinfrastructure for web-based FCM data analysis, which for the first time integrates graphical user interfaces (GUI), workflow engines, and parallel computing resources together for extensible and scalable FCM data analysis. The goal of FlowGate is to allow users to easily access state-of-the-art FCM computational methods developed using different programming languages and software on the same platform, when the implementations of these methods follow standardized I/O. By adopting existing data and information standards, FlowGate can also be integrated as the back-end data analytical platform with existing immunology and FCM databases. Experimental runs of two representative FCM data analytical methods in FlowGate on different cluster computers demonstrated that the task runtime can be reduced linearly with the number of compute cores used in the analysis.

Speakers

Robert Sinkovits

Director Scientific Computing Applications, San Diego Supercomputer Center
I get excited about all things science and computing. Started off as a physicist, but have since collaborated with immunologists, ecologists, chemists, structural biologists, economists, astronomers, engineers, social scientists, mathematicians and others. Passionate about education...


Tuesday July 28, 2015 11:00am - 11:30am CDT
Majestic C

11:00am CDT

Multidisciplinary research and education with open tools: Metagenomic analysis of 16S rRNA using Arduino, Android, Mothur and XSEDE
Modern scientific research is often multidisciplinary, involving scientists from two or more different backgrounds. This is true of work done in areas as diverse as genomics and climate change. Many of the projects undertaken in these areas involve a combination of field, lab, and computational components. This project demonstrates how the principles of active learning, performing tasks and thinking about the tasks being performed (Bonwell and Eison, 1991), can be applied to multidisciplinary undergraduate science education and outreach in the context of a field-to-analysis workflow using 16S rRNA metagenomics. Beginning with the development of the scientific questions, students work through the entire process of developing, testing and implementing sampling protocols, hardware/software platforms for geocoded sample and metadata collection, and lab protocols; they work in the field taking samples and in the lab preparing them; they perform the computational analysis of the sequencer output and synthesis of metadata and genomic data; and they disseminate the results. The students primarily come from computer science, biology, geology, and physics. This broad range makes it possible to select groups that cover a wide range of traditionally underrepresented groups. By working together for a year or more the students learn about the range of science involved, the vocabulary and experimental methods, and how their discipline contributes to answering a large, complex question.


Tuesday July 28, 2015 11:00am - 11:30am CDT
Majestic A

11:00am CDT

Storage Utilization in the Long Tail of Science
The increasing expansion of computations in non-traditional domain sciences has resulted in increasing demand for research cyberinfrastructure suitable for small- and mid-scale jobs. The computational aspects of these emerging communities are coming into focus and being addressed through the deployment of several new XSEDE resources that feature easy on-ramps, customizable software environments through virtualization, and interconnects optimized for jobs that use only hundreds or thousands of cores; however, the data storage requirements of these emerging communities remain much less well characterized.
 
To this end, we examined the distribution of file sizes on two of the Lustre file systems within the Data Oasis storage system at the San Diego Supercomputer Center. We found that there is a very strong preference for small files among SDSC's users, with 90% of all files being less than 2 MB in size. Furthermore, 50% of all file system capacity is consumed by files under 2 GB in size, and these distributions are consistent on both scratch and projects storage file systems. Because parallel file systems like Lustre and GPFS are optimized for parallel I/O to large, wide-stripe files, these findings suggest that parallel file systems may not be the most suitable storage solutions when designing cyberinfrastructure to meet the needs of emerging communities.
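 
A distribution like the one above can be gathered with a simple tree walk that buckets file sizes by powers of two. The C sketch below uses POSIX nftw() and is illustrative only; a production survey of a multi-petabyte Lustre system would query the metadata servers rather than walk the namespace file by file.
 
  #define _XOPEN_SOURCE 500
  #include <ftw.h>
  #include <stdio.h>
  #include <sys/stat.h>

  /* Bucket b counts regular files whose size rounds down to 2^b bytes. */
  static long long counts[64];

  static int visit(const char *path, const struct stat *sb,
                   int type, struct FTW *ftwbuf)
  {
      (void)path; (void)ftwbuf;
      if (type == FTW_F) {
          long long s = sb->st_size;
          int b = 0;
          while (s > 1 && b < 63) { s >>= 1; b++; }
          counts[b]++;
      }
      return 0;   /* zero means keep walking */
  }

  int main(int argc, char **argv)
  {
      if (nftw(argc > 1 ? argv[1] : ".", visit, 32, FTW_PHYS) != 0)
          perror("nftw");
      for (int b = 0; b < 64; b++)
          if (counts[b])
              printf("~2^%d bytes: %lld files\n", b, counts[b]);
      return 0;
  }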


Tuesday July 28, 2015 11:00am - 11:30am CDT
Majestic F

11:30am CDT

Using Docking and Molecular dynamics to explore GAGs-protein interactions
My research uses docking and molecular dynamics to explore glycosaminoglycan (GAG)-protein interactions. This project will be established primarily to address the computational requirements of the PEG and secondarily to serve the needs of the wider glycoscience community. The primary objectives of my project are: 1) to perform computational analyses related to glycoscience, including Combinatorial Virtual Library Screening (CVLS) experiments and molecular dynamics; 2) to set up, maintain, and periodically update the hardware and software required for such analyses, especially GPU calculations; 3) to develop advanced computational tools and protocols for better understanding of GAG-protein interactions. The computational work will support the Chemistry and Biology of Heparan Sulfate project in all aspects of computational experimentation. It will utilize computational chemistry and biology in the design of heparan sulfate molecules that are potentially useful in the treatment of thrombotic and inflammatory disorders.

Speakers

Tuesday July 28, 2015 11:30am - 12:00pm CDT
Majestic C

11:30am CDT

ISLET: An Isolated, Scalable, & Lightweight Environment for Training
In this paper we present ISLET, the Isolated, Scalable, & Lightweight Environment for Training. ISLET overcomes many of the distribution, scaling, and security challenges of providing mass training to students requiring an interactive GNU/Linux command-line environment. This Docker-based solution is evaluated, and lessons learned from real-world experience with ISLET are discussed.


Tuesday July 28, 2015 11:30am - 12:00pm CDT
Majestic A

12:00pm CDT

Lunch sponsored by Dell
Tuesday July 28, 2015 12:00pm - 1:00pm CDT
Landmark 1-7

1:00pm CDT

Lightning Talks
• Using Mozilla Badges to Certify XSEDE Users and Promote Training. Sandra Kappes and Vincent Betro
• Extending Access to HPC Skills Through a Blended Online Course. Steven Gordon, James Demmel, Lizanne Destefano and Lorna Rivera
• In-Core Volume Rendering for Cartesian Grid Fluid Dynamics Simulations. Ted Wetherbee, Elizabeth Jones, Michael Knox, Stou Sandalski and Paul Woodward
• Overview of XSEDE-PRACE Collaborative Projects in 2014. Tabitha Samuel, Shunzhou Wan, Peter Coveney, Morris Riedel, Shahbaz Memon, Sandra Gesing and Nancy Wilkins-Diehr
• Accelerating Our Understanding of Supernova Explosion Mechanisms via Simulations and Visualizations with GenASiS. Reuben Budiardja, Christian Cardall and Eirik Endeve
• XSEDE value added, cost avoidance, and return on investment. Craig Stewart, Ralph Roskies, Richard Knepper, Richard Moore, Justin Whitt and Timothy Cockerill
• Paleoscape Model of Coastal South Africa During Modern Human Origins: Progress in Scaling and Coupling Climate, Vegetation, and Agent-based Models on XSEDE. Eric Shook, Colin Wren, Curtis Marean, Alastair Potts, Janet Franklin, Francois Engelbrecht, David O'Neal, Marco Janssen, Erich Fisher, Kim Hill, Karen Esler and Richard Cowling
• GeoHashViz: Interactive Analytics for Mapping Spatiotemporal Diffusion of Twitter Hashtags. Kiumars Soltani, Aditya Parameswaran and Shaowen Wang


Tuesday July 28, 2015 1:00pm - 2:30pm CDT
Majestic D&E

2:30pm CDT

Break sponsored by NICS
Tuesday July 28, 2015 2:30pm - 3:00pm CDT
Landmark 1-7

3:00pm CDT

A Scalable Computational Approach to Political Redistricting Optimization
We present the experience of developing a scalable computational approach to political redistricting optimization by enhancing a parallel genetic algorithm library on XSEDE and Blue Waters.


Tuesday July 28, 2015 3:00pm - 3:30pm CDT
Majestic C

3:00pm CDT

Bring the NLACE Model Online Using XSEDE and HUBzero
In this paper, we describe a “gateway as a service” approach to help researchers bring their applications online quickly and make them available for the broad user community. This approach builds on the HUBzero technology and leverages the XSEDE HPC resources. It enables individual scientists and small research groups to quickly develop, deploy, and share their applications online without having to learn web programming and the XSEDE system level software stacks. The cost of such an approach for a small team would be significantly lower than building a science gateway from scratch and sustaining it in the long run. The applicability of this approach was demonstrated via an XSEDE Extended Collaborative Support Service (ECSS) project we recently completed to develop a biomechanical imaging (BMI) gateway. Our team consists of ECSS staff from Purdue University and biomechanical imaging researchers from Boston University and Rensselaer Polytechnic Institute. In the ECSS work, we were able to quickly develop an online tool for composing and executing NLACE model simulations on Gordon and visualizing the model output interactively using ParaView. The online tool runs on DiaGrid Hub (powered by HUBzero) with comprehensive support for job and workflow submission, tickets, content management, discussion forum, wiki, project, group, documentation, rating, citation, and usage tracking.


Tuesday July 28, 2015 3:00pm - 3:30pm CDT
Majestic B

3:00pm CDT

BOF: Dell Sponsored Session: Cluster Performance Optimization
Optimizing your hardware and applications is critical for getting the best performance for researchers. Join industry and academic experts as they discuss hardware and application optimization topics.
 
Topics include:
• Optimizing hardware and software for increased performance
• Application modernization on Xeon Phi and Haswell processors
• BIOS optimization and its effects on applications
• Increasing NFS and Lustre storage performance

Speakers

Tuesday July 28, 2015 3:00pm - 4:00pm CDT
Majestic F

3:00pm CDT

BOF: Development of a Next-Generation, Interoperable, Federated Cyberinfrastructure
Federation has become an important tool in building cyberinfrastructure (CI). Multiple federated systems have grown up via parallel evolution in the GENI, Cloud, and Grid communities. As distributed computing, high-performance computing, and advanced networking are brought together to solve next-generation problems, it is important to identify cross-community common supporting elements and technologies for federation. A workshop was held in Washington, DC that brought together a diverse group of networking and CI community leaders to discuss development of next-generation, interoperable, federated CI. The results of the workshop were presentations from several of the community leaders, discussion of the issues, and the formulation of a workshop report that detailed technology gaps and items that show promise for a next-generation, interoperable, federated CI. A summary of the workshop presentations, the breakout groups, and the workshop report will be presented to spur discussion among XSEDE stakeholders. Users of advanced cyberinfrastructure, XSEDE Level 1, 2 and 3 Service Providers, and representatives from CI providers are especially encouraged to attend this workshop. The Workshop PI and representatives from the NSFCloud community are expected to attend this BOF.
 
BOF organizers would like to capture consensus items from the workshop report and any additional items that perhaps should be included in a supplemental report on future CI.


Tuesday July 28, 2015 3:00pm - 4:00pm CDT
Majestic H

3:00pm CDT

BOF: The Good, Bad and Ugly of eTextBooks and eCourses
Although good pedagogy should be the overriding concern in teaching, the strongest feelings appear to be evoked by the digital delivery tools used in teaching, such as online lectures and courses (especially MOOCs), eTextBooks, and simulation software. This BOF aims to have students and teachers discuss some of their experiences with the use of digital teaching tools, with a special focus on the still-developing eTextBooks. This focus is needed because the broad subject of the effectiveness and economies of online education is a 'Grand Challenge' question on which entire conferences are held.

Moderators
Tuesday July 28, 2015 3:00pm - 4:00pm CDT
Majestic A

3:00pm CDT

Intel Tutorial Part II: Intel-Colfax MIC Parallel Programming, Hands-on Lab CDT 102
Colfax Developer Training (CDT) is an in-depth intensive course on efficient parallel programming of Intel Xeon family processors and Intel Xeon Phi coprocessors. The 1-day lab course (CDT 102) features hands-on exercises on the available programming models and best optimization practices for the Intel many-core platform, and on the usage of the Intel software development and diagnostic tools. The prerequisite for this class is the one-day seminar, CDT 101.
 
Intel Xeon Phi coprocessors, featuring the Intel Many Integrated Core (MIC) architecture, are novel many-core computing accelerators for highly parallel applications, capable of delivering greater performance per system and per watt than general-purpose CPUs. Unlike GPGPUs, they support traditional HPC programming frameworks, including OpenMP and MPI, and require the same software optimization methods as multi-core CPUs.
 
Hands-on Session
• Offload and Native: “Hello World” to complex; using MPI.
• Performance Analysis: VTune.
• Case Study: all aspects of tuning in the N-body calculation.
• Optimization I: strip-mining for vectorization, parallel reduction (a strip-mining sketch follows this list).
• Optimization II: loop tiling, thread affinity.
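 
As a rough sketch of the strip-mining exercise named above: the outer loop below walks the array in fixed-length strips that are divided among OpenMP threads, while each short inner loop is a clean candidate for the vectorizer. The strip size and the SAXPY kernel are illustrative choices, not the Colfax lab code.
 
  #include <stdio.h>

  #define N 1000000
  #define STRIP 16   /* illustrative; tuned per platform in practice */

  void saxpy_strip(int n, float a, const float *x, float *y)
  {
      #pragma omp parallel for
      for (int ii = 0; ii < n; ii += STRIP) {
          int end = ii + STRIP < n ? ii + STRIP : n;
          #pragma omp simd
          for (int i = ii; i < end; i++)
              y[i] = a * x[i] + y[i];
      }
  }

  int main(void)
  {
      static float x[N], y[N];
      for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
      saxpy_strip(N, 3.0f, x, y);
      printf("y[0] = %f\n", y[0]);   /* expect 5.0 */
      return 0;
  }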

Speakers

Tuesday July 28, 2015 3:00pm - 5:30pm CDT
Majestic G

3:30pm CDT

Performance Examinations of Multiple Time-Stepping Algorithms on Stampede Supercomputer
Our examination of the methodical implementation of multiple time-stepping (MTS) algorithms on the Stampede supercomputer reveals a speedup factor of 23 over the single time-stepping (STS) algorithm, for the same problem, with the combined algorithmic and hardware accelerations. More specifically, the MTS algorithm is 11.5 times faster than the STS algorithm, and the GPU-enabled system performs 2 times faster than the CPU-only system. Combining these speedups and using the MTS algorithm on Stampede with 16 GPU nodes, we can simulate 1 ms of multi-scale phenomena of flowing platelets in blood vessels within approximately 37 days, enabling practical modeling of millisecond-scale biological phenomena with spatial resolution at the nanoscale. The mathematical algorithms and the advances in computer hardware that can be leveraged, such as the Stampede supercomputer, allow us to explore new frontiers of cutting-edge applications in the medical and life sciences.
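 
The algorithmic gain in MTS comes from evaluating the expensive, slowly varying forces less often than the cheap, rapidly varying ones. The sketch below is a generic impulse-style two-level MTS integrator with unit masses and toy spring forces; it illustrates the technique in the abstract, not the authors' implementation.
 
  #include <stdio.h>

  #define N 4

  typedef void (*force_fn)(const double *x, double *f, int n);

  /* One outer MTS step of length K*dt: the slow force is applied as an
     impulse at the ends, while K velocity-Verlet inner steps of size dt
     integrate the fast force. */
  void mts_step(double *x, double *v, int n, double dt, int K,
                force_fn fast, force_fn slow)
  {
      double f[N];
      slow(x, f, n);                                /* expensive, done once */
      for (int i = 0; i < n; i++) v[i] += 0.5 * K * dt * f[i];
      for (int k = 0; k < K; k++) {                 /* K cheap inner steps */
          fast(x, f, n);
          for (int i = 0; i < n; i++) v[i] += 0.5 * dt * f[i];
          for (int i = 0; i < n; i++) x[i] += dt * v[i];
          fast(x, f, n);
          for (int i = 0; i < n; i++) v[i] += 0.5 * dt * f[i];
      }
      slow(x, f, n);
      for (int i = 0; i < n; i++) v[i] += 0.5 * K * dt * f[i];
  }

  static void f_fast(const double *x, double *f, int n)   /* stiff spring */
  { for (int i = 0; i < n; i++) f[i] = -4.0 * x[i]; }

  static void f_slow(const double *x, double *f, int n)   /* soft spring */
  { for (int i = 0; i < n; i++) f[i] = -0.1 * x[i]; }

  int main(void)
  {
      double x[N] = {1.0, 1.0, 1.0, 1.0}, v[N] = {0.0};
      for (int s = 0; s < 1000; s++)
          mts_step(x, v, N, 0.01, 8, f_fast, f_slow);
      printf("x[0] = %f\n", x[0]);
      return 0;
  }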


Tuesday July 28, 2015 3:30pm - 4:00pm CDT
Majestic C

3:30pm CDT

The CIPRES Workbench: A Flexible Framework for Creating Science Gateways
Here we describe the CIPRES Workbench (CW), an open source software framework for creating new science gateways with minimal overhead. The CW is a web application that can be deployed on a modest server and can be configured to submit command line instructions to any resource where the application has submission privileges. It is designed to be highly configurable and customizable, and supports GUI-based access to HPC resources through a web browser interface as well as programmatic access via a ReSTful API. Using browser access, the CW architecture creates an environment with secure user accounts where user input data, job results, and job provenance are stored. Using ReSTful access, it allows users with a registered client application to deliver command lines to analytical codes and have results returned from any compute resource. A development effort is underway to allow the CW to submit jobs via the Science Gateways as a Platform (SciGaP) services hosted at Indiana University.


Tuesday July 28, 2015 3:30pm - 4:00pm CDT
Majestic B

4:00pm CDT

Performance Assessment of Real-time Estimation of Continuous-time Stochastic Volatility of Financial Data on GPUs
Real-time Bayesian estimation of stochastic volatility for financial ultra-high-frequency data is plagued by the curse of high dimensionality. Methods have been developed to manage this problem through the use of parallel computing, relying on supercomputing and GPU resources. In this technical paper, we assess the performance of GPU computing and show that an adequately equipped GPU workstation can rise to the task, producing reasonably real-time results with actual data from financial markets.


Tuesday July 28, 2015 4:00pm - 4:30pm CDT
Majestic C

4:00pm CDT

Enabling HPC Simulation Workflows for Complex Industrial Flow Problems
Industrial use of simulation-based engineering that takes advantage of massively parallel computing is limited by the costs associated with developing and using high performance computing software and systems. To address industry's ability to effectively include large-scale parallel simulations in daily production use, two key areas need to be addressed. The first is access to large-scale parallel computing systems that are cost effective to use. The second is support for complete simulation workflow execution on these systems by industrial users. This paper presents an approach, and a set of associated software components, that can support industrial users on large-scale parallel computing systems available at national laboratories, universities, or on clouds.


Tuesday July 28, 2015 4:00pm - 4:30pm CDT
Majestic B

4:00pm CDT

BOF: SDSC's Comet - Cyberinfrastructure for the Long Tail of Science
NSF-funded computing centers have traditionally focused on the requirements of academic researchers with the most computationally demanding applications. But now that computational science is so pervasive, there is a need for infrastructure that can serve more researchers and disciplines than just those at the peak of the HPC pyramid. SDSC’s Comet system was specifically designed to address the needs of this much larger and more expansive science community – the “long tail of science”. The full system comprises nearly 2000 compute nodes, each containing two 12-core Intel Haswell processors capable of executing the AVX2 instruction set, 128 GB of DDR4 memory and two local 160 GB flash drives. The system also has 36 GPU nodes, containing two dual-GPU NVIDIA K80 cards, and later this year will deploy four 1.5 TB large memory nodes and provide high performance virtual clusters using single root I/O virtualization (SR-IOV).
 
After briefly describing the Comet architecture and its capabilities, we will seek feedback from the BoF participants that will help us to better serve the needs of the XSEDE community. Discussion topics may include, but are by no means limited to: identifying domains, users and applications that are well matched to Comet, suggested software installations, documentation and training to help users make effective use of Comet’s unique features, maximizing the usability of Comet by Science Gateway developers, complementing other XSEDE resources and assisting new users in quickly getting up to speed using the system.

Speakers

Robert Sinkovits

Director Scientific Computing Applications, San Diego Supercomputer Center
I get excited about all things science and computing. Started off as a physicist, but have since collaborated with immunologists, ecologists, chemists, structural biologists, economists, astronomers, engineers, social scientists, mathematicians and others. Passionate about education...


Tuesday July 28, 2015 4:00pm - 5:00pm CDT
Majestic H

4:00pm CDT

BOF: STEM Activities for Kids K-12
Come share simple, cheap activities people can do with kids K-12 to get them interested in STEM. Preferred activities are ones that are "open-ended" and require the kids to do some engineering. Examples of such activities are Scribblebots (http://www.skunkscience.com/scribblebots/), Beetlebots (http://makezine.com/projects/make-12/beetlebots/), and DIY Speakers (https://www.youtube.com/watch?v=ytaiA1ptV-0). Where possible, attendees should bring an example of the activity for people to check out, activity instructions, and/or a parts list with estimated costs.


Tuesday July 28, 2015 4:00pm - 5:00pm CDT
Majestic A

5:30pm CDT

BOF: Drilling Down: Understanding User-Level Activity on Today's Supercomputers With XALT
Let’s talk real, no-kiddin’ supercomputer analytics, aimed at moving beyond monitoring the machine as a whole or even its individual hardware components. We’re interested in drilling down to the level of individual batch submissions, users, and binaries. And we’re not just targeting performance: we’re after ready answers to the "what, where, how, when and why" that stakeholders are clamoring for – everything from which libraries (or individual functions!) are in demand to preventing the problems that get in the way of successful science. This BoF will bring together those with experience and interest in present and future system tools and technologies that can provide this type of job-level insight, and will be the kickoff meeting for a new Special Interest Group (SIG) for those who want to explore this topic more deeply.


Tuesday July 28, 2015 5:30pm - 6:30pm CDT
Majestic B

5:30pm CDT

BOF: Enabling biology—especially genomics—with national cyberinfrastructures.
Biologists are increasingly becoming users of national computational resources, and NSF has taken steps to actively encourage this movement. This increase is driven and enabled by advances in the genomics sciences, which have flourished thanks to recent and rapid improvements in DNA sequencing technology, the so-called Next Generation Sequencers (NGS). The genomics sciences include everything from simultaneous genomic examination of complex bacterial communities to medically motivated personal and cancer genomics. For many biologists, NGS has allowed them to obtain genomic and transcriptomic data for their non-model organisms.
 
However, this abundance of data has confronted biologists with big data and computational barriers; at the same time, national resources—such as those provided by XSEDE or the Open Science Grid (OSG)—are underutilized by biologists. Many XSEDE-provided resources are working diligently to support this community of scientists and their changing cyberinfrastructure needs.
 
This Birds of a Feather session welcomes all attendees of XSEDE '15 who provide, or would like to provide, support to the biological sciences, especially in the realm of genomics. The session will be moderated by Dr. Thomas Doak, a genome scientist and manager of the National Center for Genome Analysis Support (NCGAS), and Dr. Philip D. Blood, Senior Computational Scientist at the Pittsburgh Supercomputing Center. Both institutions provide XSEDE resources supporting a large community of genomics researchers.
 
IU participants will kick off the BOF by relaying the results from two recent surveys of NCGAS users and NSF-funded researchers. Topics for discussion will include bioinformatics software management and providing access through web-based workflow tools, but any topic is fair game. The session’s goal is to have a fun and open discussion of challenges and approaches to serving biologists with national cyberinfrastructures.


Tuesday July 28, 2015 5:30pm - 6:30pm CDT
Majestic C

5:30pm CDT

BOF: Integrating Adiabatic Quantum Computing and XSEDE to Enhance the Current State-of-the-art Discovery Environment
With advances in computational systems promising exascale performance in the near future, the ability to solve large, multifarious scientific problems in various science and engineering fields, as well as societal domains, has increased. The NSF XSEDE program integrates XD (eXtreme Digital) resources that are architecturally and geographically diverse and provides computational capabilities and services to XD users, thus facilitating a state-of-the-art discovery environment for solving large-scale, computationally intensive problems. However, some scientific problems, such as binary-variable optimization problems, may be NP-hard; consequently, the time required for a classical computer to perform the optimization grows much faster than any power law with the size of the system. Adiabatic quantum computers have unique capabilities, compared to classical computers, for obtaining optimal solutions to such problems and reducing the time to solution. In past decades, the use of quantum computers as a stable computing technology has been debatable. Recently, however, this perception has begun to change with the availability of adiabatic quantum annealers with over one thousand qubits from D-Wave Systems.
 
The intent of this Birds of a Feather session is to foster a free and open exchange of experiences and ideas among sites, researchers, users, and developers who have already been involved with research using quantum computers and their requisite infrastructure and those who are considering doing so. There is no “right answer” to using quantum computing, and the choice will certainly depend on many factors, such as system size, existing research area, the type of science and engineering problems, and the availability of access to quantum computers. This BoF session will provide a forum for users of XSEDE, as well as the developers involved in testing and validating the performance of D-Wave quantum computers, to discuss experiences and issues in using the systems and porting problems to them, with respect to NP-hard problems such as random spanning tree and machine learning problems.
 
This BoF will bring together a panel of experts from Mississippi State University and D-Wave Systems Inc. to survey and compare state-of-the-art classical and quantum computing systems. Future plans for integrating quantum computers with XSEDE, and the potential benefits to the community at large, will be discussed. We invite researchers, users, and developers to participate in this session to form collaborations, learn about this body of work, and influence its direction.


Tuesday July 28, 2015 5:30pm - 6:30pm CDT
Majestic F

5:30pm CDT

BOF: Navigating XSEDE's Diverse Resource Portfolio
The XSEDE CI ecosystem has changed with the recent introduction of the SDSC Comet and TACC Wrangler resources, and with the new and unique IU/TACC Jetstream and PSC Bridges resources coming in 2016, users and service providers alike will be tasked with understanding the nuances of these resources while pushing the boundaries of the CI user community. This BOF will bring together allocations staff, key service providers, and XSEDE users to discuss the allocation policies and processes of XSEDE's increasingly diverse resource portfolio. The conversation will also provide a platform for service provider representatives to explain the newest resources available via XSEDE, along with guidelines for requesting and managing allocations.


Tuesday July 28, 2015 5:30pm - 6:30pm CDT
Majestic A
 
Wednesday, July 29
 

7:00am CDT

Breakfast sponsored by Corsa
Wednesday July 29, 2015 7:00am - 8:00am CDT
Landmark 1-7

7:00am CDT

Registration
Registration

Wednesday July 29, 2015 7:00am - 4:30pm CDT
Landmark Registration Office

8:00am CDT

Plenary: NSF Project Panel
NSF Project Plenary Panel will include representatives from the NSF and the PIs of recently funded projects: Craig Stewart (Jetstream), Nick Nystrom (Bridges), Michael Norman (Comet), Niall Gaffney (Wrangler), Jim Bottum (ACI-REF), Robert Ricci (CloudLab), and Kate Keahey (Chameleon).


Wednesday July 29, 2015 8:00am - 9:30am CDT
Majestic D&E

9:30am CDT

Break sponsored by Internet2
Wednesday July 29, 2015 9:30am - 10:00am CDT
Landmark 1-7

10:00am CDT

Accelerating Our Understanding of Supernova Explosion Mechanisms via Simulations and Visualizations with GenASiS
Core-collapse supernovae are among the most powerful explosions in the universe, releasing about 1e53 erg of energy on timescales of a few tens of seconds. These explosion events are also responsible for the production and dissemination of most of the heavy elements, making life as we know it possible. Yet exactly how they work is still unresolved. One reason is the sheer complexity and cost of self-consistent, multi-physics, multi-dimensional numerical simulations of core-collapse supernovae, which are impractical, and often impossible, even on the largest supercomputers available. To advance our understanding, we must instead often use simplified models, teasing out the most important ingredients for successful explosions while helping us interpret results from higher fidelity multi-physics models. In this paper we investigate the role of instabilities in the supernova environment. We present simulation and visualization results produced by our code GenASiS.


Wednesday July 29, 2015 10:00am - 10:30am CDT
Majestic C

10:00am CDT

In-Core Volume Rendering for Cartesian Grid Fluid Dynamics Simulations
The volume rendering code Srend is designed for visualization of computational fluid dynamics simulations that use Cartesian grids, and it is designed to be compiled within the application for in-core rendering. Srend was embedded in three codes: the Piecewise Parabolic Method code (PPMstar), Cloud Model 1 (CM1), and the Weather Research and Forecasting (WRF) model. Results show modest rendering overhead, fine quality imagery, and high potential for scaling. When embedded in a code, Srend produces immediate, quality imagery that can sharply reduce data output and storage requirements.


Wednesday July 29, 2015 10:00am - 10:30am CDT
Majestic B

10:00am CDT

Jetstream: A self-provisioned, scalable science and engineering cloud environment
Recent National Science Foundation (NSF) grant awards are increasing the diversity of advanced cyberinfrastructure resources, and the national research community XD ecosystem (that is, the Extreme Science and Engineering Discovery Environment (XSEDE) and the service provider resources supported in part by XSEDE) is becoming more diverse. In this paper, we describe Jetstream, the first cloud resource funded by the NSF to provide general purpose, cloud computing resources across all science and engineering domains supported by XSEDE.


Wednesday July 29, 2015 10:00am - 10:30am CDT
Majestic F

10:00am CDT

Allinea Embedded Tutorial: Debugging and Optimizing HPC Codes
In this presentation we will introduce Allinea Forge, an integrated environment for debugging and optimizing parallel codes. We will provide hands-on demonstrations of how our products reduce development time, simplify debugging, and ease application performance enhancement. Over 50% of development time can be spent on defect correction and application optimization. Adopting the Allinea Unified Environment will greatly reduce this time, saving you and your organization time and money.
 
Allinea DDT: Using sample codes, we will walk through the major capabilities of the GUI to illustrate how DDT can debug applications ranging from a single thread to large scale codes.
• Using sparklines to visualize variable values across processes
• Using semantic analysis tools to catch bugs before you even run the code
• Using memory debugging to trap array out-of-bounds errors and memory leaks (see the snippet after this list)
• Using the array viewer to visualize multi-dimensional variables
• Offline debugging
• Using watchpoints to stop on a variable change
• Using tracepoints as a flexible printf alternative
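 
For reference, the snippet below is the classic kind of heap out-of-bounds bug that the memory debugging feature is meant to trap at the faulting line rather than at a much later crash. It is a generic example, not part of the tutorial materials.
 
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      int n = 100;
      double *a = malloc(n * sizeof *a);
      for (int i = 0; i <= n; i++)   /* bug: writes one element past a[] */
          a[i] = (double)i;
      printf("a[0] = %f\n", a[0]);
      free(a);                       /* corruption may only surface here */
      return 0;
  }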
 
Allinea MAP: We will illustrate how, in a matter of minutes, you can understand the nature of your application’s performance through visualization:
• How to check MPI and memory usage
• Determine how well a code is vectorized
• See how the amount of time spent in memory operations varies over time and processes
• How to zoom in to a part of the timeline and isolate the performance behavior of an isolated piece of code
 
Performance Reports: We will present some sample Performance Reports to illustrate how the tool benefits a broad range of users, is immediately understandable, and guides users toward a solution.

Speakers

Wednesday July 29, 2015 10:00am - 12:00pm CDT
Majestic G

10:00am CDT

iRODS Embedded Tutorial: An Introduction to iRODS Data Management Software
iRODS is open source data management software, providing a virtualization layer that spans storage devices, a metadata catalog, a rule engine that can be programmed to execute any action upon any trigger, and federated access between independently managed deployments. It has been used at the multi-petabyte scale to implement sophisticated preservation and access control policies, with auditing; extract metadata from files to make the metadata searchable; track data processing steps from raw inputs to finished products; and unify data management and access across local and cloud-based file systems and object stores.
 
This tutorial will be a hands-on workshop where attendees will learn how to: use iRODS administrative commands, set up automatic replication between a local file system and a cloud-based object storage system, use an iRODS rule to automate file format transformation, explore the iRODS metadata catalog, and transfer data to and from iRODS.

Speakers

Wednesday July 29, 2015 10:00am - 12:00pm CDT
Majestic H

10:30am CDT

Paleoscape Model of Coastal South Africa During Modern Human Origins: Progress in Scaling and Coupling Climate, Vegetation, and Agent-based Models on XSEDE
To better understand the origins of modern humans, we are developing a paleoscape model that simulates the climatic conditions and distribution of natural resources available to humans during this critical stage of human evolution. Our geographic focus is the southern Cape region of South Africa, which was rich in natural resources for hunter-gatherer groups including edible plants, shellfish, animals, and raw materials. In this article we report our progress in using the Extreme Science and Engineering Discovery Environment (XSEDE) to realize the paleoscape model, which consists of four models: a climate model, correlative and dynamic vegetation models, and agent-based models. We adopt a workflow-based approach that combines modeling and data analytics to couple these modeling components, which will leverage multiple XSEDE resources to generate and analyze multi-terabyte datasets. We have made significant progress in scaling climate and agent-based models on XSEDE. Our next steps will be to couple these models to the vegetation models to complete the workflow, which will require overcoming multiple theoretical, methodological, and technical challenges.


Wednesday July 29, 2015 10:30am - 11:00am CDT
Majestic C

10:30am CDT

Autotuning OpenACC Work Distribution via Direct Search
OpenACC provides a high-productivity API for programming GPUs and similar accelerator devices. One of the last steps in tuning OpenACC programs is selecting values for the num_gangs and vector_length clauses, which control how a parallel workload is distributed to an accelerator's processing units. In this paper, we present an autotuner that can assist the programmer in selecting high-quality values for these parameters, and we evaluate the effectiveness of two direct search methods in finding solutions. We compared num_gangs and vector_length values found by our autotuner to those found by a bounded exhaustive search. On a suite of 36 OpenACC kernels, one or both of our autotuner's direct search methods identified values within the top 5% for 29 of the kernels; all results were within the top 30%. The majority of the kernels achieved a statistically significant speedup over the compiler's defaults, and the autotuner required only 7-11 runs of the target program, on average.
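 
For readers unfamiliar with the two clauses, the sketch below shows where num_gangs and vector_length attach on an OpenACC parallel loop. The values 256 and 128 are placeholders; the paper's point is precisely that good values are program- and device-specific, which is what the autotuner searches for.
 
  #include <stdio.h>
  #include <stdlib.h>

  void saxpy(int n, float a, const float * restrict x, float * restrict y)
  {
      /* The two tuning knobs: how many gangs to launch and how wide
         each gang's vector lanes should be. */
      #pragma acc parallel loop num_gangs(256) vector_length(128) \
                  copyin(x[0:n]) copy(y[0:n])
      for (int i = 0; i < n; i++)
          y[i] = a * x[i] + y[i];
  }

  int main(void)
  {
      int n = 1 << 20;
      float *x = malloc(n * sizeof *x), *y = malloc(n * sizeof *y);
      for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
      saxpy(n, 3.0f, x, y);
      printf("y[0] = %f\n", y[0]);   /* expect 5.0 */
      free(x); free(y);
      return 0;
  }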


Wednesday July 29, 2015 10:30am - 11:00am CDT
Majestic B

10:30am CDT

Bridges: A Uniquely Flexible HPC Resource for New Communities and Data Analytics
In this paper, we describe Bridges, a new HPC resource that will integrate advanced memory technologies with a uniquely flexible, user-focused, data-centric environment to empower new research communities, bring desktop convenience to HPC, connect to campuses, and drive complex workflows. Bridges will differ from traditional HPC systems and support new communities through extensive interactivity, gateways (convenient web interfaces that hide complex functionality and ease access to HPC resources) and tools for gateway building, persistent databases and web servers, high-productivity programming languages, and virtualization. Bridges will feature three tiers of processing nodes having 128GB, 3TB, and 12TB of hardware-enabled coherent shared memory per node to support memory-intensive applications and ease of use, together with persistent database and web nodes and nodes for logins, data transfer, and system management. State-of-the-art Intel® Xeon® CPUs and NVIDIA Tesla GPUs will power Bridges’ compute nodes. Multiple filesystems will provide optimal handling for different data needs: a high-performance, parallel, shared filesystem, node-local filesystems, and memory filesystems. Bridges’ nodes and parallel filesystem will be interconnected by the Intel Omni-Path Fabric, configured in a topology developed by PSC to be optimal for the anticipated data-centric workload. Bridges will be a resource on XSEDE, the NSF Extreme Science and Engineering Discovery Environment, and will interoperate with other advanced cyberinfrastructure resources. Through a pilot project with Temple University, Bridges will develop infrastructure and processes for campus bridging, i.e. offloading jobs at periods of unusually high load to the other site and facilitating cross-site data management. Education, training, and outreach activities will raise awareness of Bridges and data-intensive science across K-12 and university communities, industry, and the general public.


Wednesday July 29, 2015 10:30am - 11:00am CDT
Majestic F

11:00am CDT

Inversion of magnetotelluric data using integral equation approach with variable sensitivity domain: application to EarthScope MT data
We have developed a multi-level parallel magnetotelluric (MT) integral equation based inversion program which uses a variable sensitivity domain. The limited sensitivity of the data, which decreases with increasing frequency, is exploited by a receiver sensitivity domain, which also varies with frequency. We assess the effect of inverting the principal impedance tensor, the full impedance tensor, and the full tensor with magnetovariational data (tipper). We first apply this method to several models and then invert the EarthScope MT data. We recover well the prominent features in the area, including the resistive structure associated with the Juan de Fuca slab subducting beneath the northwestern United States, the conductive zone of partially melted material above the subducting slab at the Cascade volcanic arc, conductive features in the Great Basin resulting from prior subduction and in the area of Yellowstone associated with the hot spot, and resistive areas to the east corresponding to the older and more stable cratons. Our results agree well with other publications, which furthers confidence in the inversion of the EarthScope MT project.


Wednesday July 29, 2015 11:00am - 11:30am CDT
Majestic C

11:00am CDT

Porting Scientific Libraries to PGAS in XSEDE Resources: Practice and Experience
The next generation of supercomputers presents new and complex challenges that might require a change in the current paradigm of how parallel applications are developed. Hybrid programming is usually described as the best approach for exascale computers. PGAS programming models are considered an interesting alternative to work together with MPI in this hybrid model to achieve good performance on those machines. This is a very promising approach, especially for one-sided and irregular communication patterns. However, this is still an emerging technology, and there is not much previous experience in porting existing MPI applications to the PGAS model. Given the promising relevance of this approach for the next generation of devices, it is valuable to have early experience porting applications, as well as knowledge of the issues that might be faced in this new paradigm. In this paper we present two different scientific applications that are currently implemented in MPI and that are promising candidates for the PGAS paradigm. We describe how these applications have been ported, the challenges faced, and some of the solutions that we found. We also show how PGAS models can achieve great performance when compared to MPI.
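 
As a flavor of the one-sided communication style that PGAS favors, the sketch below uses OpenSHMEM, one PGAS library chosen here purely for illustration (the abstract does not name the specific PGAS model used in the paper). Each PE writes directly into its right neighbor's symmetric memory with no matching receive posted on the target.
 
  #include <stdio.h>
  #include <shmem.h>

  int main(void)
  {
      static long src, dst;   /* symmetric: same address on every PE */
      shmem_init();
      int me = shmem_my_pe(), np = shmem_n_pes();
      src = me;
      shmem_long_put(&dst, &src, 1, (me + 1) % np);  /* one-sided write */
      shmem_barrier_all();                           /* complete all puts */
      printf("PE %d received %ld\n", me, dst);
      shmem_finalize();
      return 0;
  }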


Wednesday July 29, 2015 11:00am - 11:30am CDT
Majestic B

11:00am CDT

XSEDE value added, cost avoidance, and return on investment
In science and engineering, discovery often lags behind research by many years, and the full potential of the research may not be known for decades. Thus large research facilities struggle to quantify a return on the substantial federal investments that fund their often broad scopes of operations. This paper presents a methodology in three parts that illustrates the value of the investment in XSEDE: 1) a qualitative examination of the primary areas where XSEDE adds value to the open research community, 2) a “thought model” examining the savings realized by the NSF through the centralization and coordination XSEDE provides, and 3) an assessment of the value provided to Service Providers within the XSEDE ecosystem. XSEDE adds significantly to the national research community because it functions as one unified interface to the XD ecosystem and because of its scale. It is not yet possible to put a financial figure on this value, but anecdotal evidence indicates that it is significant – perhaps greater than the entire cost of XSEDE over five years. It is possible to qualitatively assess the return on investment for XSEDE’s delivery of essential core functions for a national advanced cyberinfrastructure center. An analysis that is partly quantitative and partly qualitative suggests that the ROI for XSEDE’s delivery of essential core functions to the nation is greater than 1.0, and qualitative arguments considering anecdotal evidence of XSEDE’s value added suggest that the overall return to the nation on the NSF investment in XSEDE is likewise greater than 1.0.


Wednesday July 29, 2015 11:00am - 11:30am CDT
Majestic F

11:30am CDT

Exploring the earliest galaxies with Blue Waters and the James Webb Space Telescope
Galaxies are complicated beasts - many physical processes operate simultaneously, and over a huge range of scales in space and time.  As a result, accurately modeling the formation and evolution of galaxies over the lifetime of the universe presents tremendous technical challenges - challenges that require a computer like Blue Waters to adequately resolve.  In this talk I will describe some of the important unanswered questions about galaxy formation and present simulations, performed on Blue Waters, of a large population of galaxies that formed in the earliest ages of the Universe. These galaxies will be detectable by the James Webb Space Telescope when it is launched in a few years, and will provide critical clues about the formation of the first supermassive black holes, the first generations of metal-enriched stars, and the topology of reionization and its effects on later generations of galaxy formation.


Wednesday July 29, 2015 11:30am - 12:00pm CDT
Majestic C

11:30am CDT

A Performance Predictor for UltraScan Supercomputer Calculations
A performance prediction model for the two-dimensional spectrum analysis algorithm of the UltraScan software was developed to predict calculation times on XSEDE supercomputer infrastructure. The efficiency profiles for various analysis types and analysis parameter combinations on Lonestar, Trestles, and Stampede were determined by mining performance data from past analyses stored in the UltraScan LIMS database. The resulting model was validated against an analytical performance model. The model can be integrated into the existing UltraScan submission infrastructure to provide improved wall-time estimates for the XSEDE supercomputer clusters and thereby increase queuing efficiency.
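A minimal sketch of the general idea, not UltraScan's actual model: mine historical job records, fit wall time against a work-per-core predictor by least squares, and pad the prediction before requesting wall time from the scheduler. All column names, the model form, and the numbers are invented for illustration.

```python
# Fit a wall-time model from (hypothetical) historical job records.
import numpy as np

# (grid_points, iterations, cores, observed_wall_time_seconds)
jobs = np.array([
    [1.0e5, 50, 16, 410.0],
    [2.0e5, 50, 32, 430.0],
    [4.0e5, 75, 64, 660.0],
    [8.0e5, 75, 64, 1250.0],
])
work_per_core = jobs[:, 0] * jobs[:, 1] / jobs[:, 2]
A = np.column_stack([work_per_core, np.ones(len(jobs))])  # slope + overhead
(coef, overhead), *_ = np.linalg.lstsq(A, jobs[:, 3], rcond=None)

def predict_walltime(grid_points, iterations, cores, margin=1.25):
    """Padded estimate for the batch scheduler's wall-time request."""
    return margin * (coef * grid_points * iterations / cores + overhead)

print(predict_walltime(3.0e5, 60, 32))
```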


Wednesday July 29, 2015 11:30am - 12:00pm CDT
Majestic B

12:00pm CDT

Lunch sponsored by Intel
Wednesday July 29, 2015 12:00pm - 1:00pm CDT
Landmark 1-7

12:00pm CDT

Student and Mentor Lunch
Wednesday July 29, 2015 12:00pm - 1:00pm CDT
Landmark 1-7

1:00pm CDT

Campus Champions: Fellow Presentations & Discussion Panel
The 2014 Campus Champion Fellows, Hadrian Djohari, Justin Oelgoetz, Brian Couger, and Neranjan Edirisinghe, will present lightning talks on their fellowship projects and receive their participation awards.
 
The plenary panel discussion this year is chaired by Ruth Marinshaw, and the members are Don Liu, James Cuff, Henry Neeman, and Jack Smith. They will address the question: Why are Campus Champions Relevant?


Wednesday July 29, 2015 1:00pm - 3:00pm CDT
Majestic D&E

3:00pm CDT

Break sponsored by Omnibond
Wednesday July 29, 2015 3:00pm - 3:30pm CDT
Landmark 1-7

3:30pm CDT

A System for Load Balancing of Local Minimization and Energy Calculations in Crystal Structure Prediction Simulations
Crystal structure prediction (CSP) is an area of increasing scientific importance [1,2]. The development of effective computational methods for CSP would deepen our understanding of crystal growth and structural analysis. In a more applied sense, CSP matters to the pharmaceutical industry for detecting crystal polymorphs and formulating co-crystals to improve the bioavailability of drugs. Over the past ten years great strides have been made showing that crystal structure prediction is viable and practical. Especially promising are recent advances in obtaining accurate predictions using dispersion-corrected density functional theory (DFT-D) [3].
 
At the heart of CSP methods are two fundamental tasks. The first is the energetic ranking of structures. In most cases a molecule will crystallize in an energetically stable form; consequently, the accurate assessment of lattice energies is essential in crystal structure prediction. Of particular importance is the balance between intra- and inter-molecular forces; correctly representing this interplay is critical. The second task is efficiently exploring the global energy hypersurface, as the energy landscape of an arbitrary molecular crystal can be quite complex, containing many local potential-energy wells. Properly searching this surface requires a thorough but efficient search algorithm to find the global minimum as well as other local minima in close energetic proximity, which may exist in the case of polymorphism.
 
The Modified Genetic Algorithm for Clusters and Crystals (MGAC) [4–6] is a parallel, distributed, multithreaded application for crystal structure prediction of small organic molecules. The algorithm uses population-based natural selection to perform the global search of the energy hypersurface. Most recently we have shown that DFT-D is effective when applied as the sole energy ranking method for MGAC [7]. By integrating the Quantum Espresso software suite [8] with MGAC we have been able to correctly predict the three atmospheric-pressure polymorphs of glycine [3] and the single crystal form of histamine (unpublished results) when performing searches in the native space groups. The primary drawback of using DFT-D for CSP is the substantial computational time required: compared with other commonly used methods for energetic ranking in CSP (such as molecular mechanics), DFT-D is more expensive by a factor of at least 1000 when performing local optimization.
 
In light of this fact, performing blind predictions of crystal structures of molecules of pharmaceutical interest will require a significant scale-up of the resources that MGAC can efficiently utilize. The current version of MGAC relies on a server/client design that does not scale well to large node counts (greater than 100), nor does it allow on-the-fly workload redistribution. A related issue is scheduling inefficiency: when scaling to large numbers of nodes, MGAC often leaves nodes idle for extended periods, yielding efficiency as low as 50% in some cases. MGAC is also highly susceptible to file system instabilities and lacks a robust mechanism for dealing with node failures or sudden job termination. To mitigate these factors we have begun development of MGAC2, which is projected to be functional and in production by mid-April 2015. By mid-May we expect to have validated MGAC2 against glycine and histamine, results that we expect to present at this conference.
 
As part of the development of MGAC2 we will rely heavily on XSEDE to provide the resources required to perform blind CSP searches. We intend to show that blind CSP searches using DFT-D are both tenable and effective in finding crystal structures of pharmaceutical interest. We also intend to show that large-scale simulations of population-based systems can scale effectively to large numbers of cores, in a robust way that maximizes resource utilization. Finally, we will present benchmarks for various molecules of pharmaceutical interest, as well as preliminary results for the upcoming sixth blind test hosted by the Cambridge Crystallographic Data Centre.
 
[1] G.M. Day, T.G. Cooper, A.J. Cruz-Cabeza, K.E. Hejczyk, H.L. Ammon, S.X.M. Boerrigter, et al., Significant progress in predicting the crystal structures of small organic molecules - a report on the fourth blind test, Acta Crystallogr. Sect. B: Struct. Sci. 65 (2009) 107–125.
[2] D.A. Bardwell, C.S. Adjiman, H.L. Ammon, Y.A. Arnautova, E. Bartashevich, S.X.M. Boerrigter, et al., Towards crystal structure prediction of complex organic molecules - a report on the fifth blind test, Acta Crystallogr. Sect. B 67 (2011).
[3] A.M. Lund, G.I. Pagola, A.M. Orendt, M.B. Ferraro, J.C. Facelli, Crystal structure prediction from first principles: The crystal structures of glycine, Chem. Phys. Lett. 626 (2015) 20–24.
[4] S. Kim, A.M. Orendt, M.B. Ferraro, J.C. Facelli, Crystal Structure Prediction of Flexible Molecules Using Parallel Genetic Algorithms with a Standard Force Field, J. Comp. Chem. 30 (2009) 1973–1985.
[5] V.E. Bazterra, M.B. Ferraro, J.C. Facelli, Modified genetic algorithm to model crystal structures. I. Benzene, naphthalene and anthracene, J. Chem. Phys. 116 (2002) 5984–5991.
[6] V.E. Bazterra, M.B. Ferraro, J.C. Facelli, Modified genetic algorithm to model crystal structures. II. Determination of a polymorphic structure of benzene using enthalpy minimization, J. Chem. Phys. 116 (2002) 5992–5995.
[7] A.M. Lund, A.M. Orendt, G.I. Pagola, M.B. Ferraro, J.C. Facelli, Optimization of Crystal Structures of Archetypical Pharmaceutical Compounds: A Plane-Wave DFT-D Study Using Quantum Espresso, Cryst. Growth Des. 13 (2013) 2181–2189.
[8] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, et al., QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, J. Phys. Condens. Matter 21 (2009) 395502.
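A minimal sketch of the dynamic load-balancing idea motivating MGAC2, under the assumption that a pull-based task queue replaces static server/client dispatch: workers grab the next candidate structure as soon as they free up, so widely varying DFT-D runtimes no longer leave nodes idle. relax_and_rank is a placeholder for a real DFT-D local optimization, and all numbers are illustrative.

```python
# Pull-based evaluation of a GA population (task-queue load balancing).
import random
import time
from multiprocessing import Pool

def relax_and_rank(structure_id):
    # Stand-in for a DFT-D local optimization; real runtimes vary widely
    # between candidates, which is what makes dynamic scheduling pay off.
    time.sleep(random.uniform(0.01, 0.1))
    return structure_id, random.uniform(-150.0, -100.0)  # (id, energy)

if __name__ == "__main__":
    population = list(range(64))  # candidate crystal structures
    with Pool(processes=8) as pool:
        # imap_unordered hands out new work as soon as a worker is free.
        ranked = sorted(pool.imap_unordered(relax_and_rank, population),
                        key=lambda pair: pair[1])
    survivors = ranked[: len(ranked) // 2]  # selection step of the GA
    print(survivors[:5])
```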


Wednesday July 29, 2015 3:30pm - 4:00pm CDT
Majestic C

3:30pm CDT

Using Mozilla Badges to Certify XSEDE Users and Promote Training
Training is becoming more relevant in High Performance Computing (HPC) as the community that uses it grows beyond the traditional hard sciences. Moreover, with new technologies coming online every year, even those with backgrounds in HPC must keep up to date with the newest methodologies. Most importantly, users want to show not only that they attended a training event but also that they learned a specific set of skills that can be vouched for. To make this possible, XSEDE has launched a Moodle site where users can interact during training events and earn beginner, intermediate, or advanced Mozilla badges to demonstrate competency. This work discusses how the standards and assessments were created, along with preliminary results from our pilot instance of Mozilla badging at an OpenACC training event.


Wednesday July 29, 2015 3:30pm - 4:00pm CDT
Majestic A

3:30pm CDT

Industry Challenge: Molecular Simulations for Virtual Formulation of Consumer Products
Procter and Gamble uses modeling and simulation to develop consumer products, and high performance computing has been a critical enabling factor. Its engagement in formulation and materials science is rapidly maturing.
 
Development of new consumer products includes new technologies with improved cleaning performance, better sustainability through the use of less material, and a delightful experience during use. Molecular simulations of multi-component and surfactant mixtures have become an important tool for designing consumer products. Studying interactions at the molecular level provides a virtual microscope operating at length scales of billionths of a meter, showing how new ingredients or formula compositions change the properties and performance of consumer products.
 
This presentation will give examples, from the XSEDE Industry Challenge program and elsewhere, of how molecular simulations can be used to research the physics behind common consumer products.

Speakers

Wednesday July 29, 2015 3:30pm - 4:00pm CDT
Majestic B

3:30pm CDT

Overview of XSEDE-PRACE Collaborative Projects in 2014
In this paper we give a brief overview of the three projects chosen for XSEDE-PRACE collaboration in 2014. We begin with an introduction to the XSEDE and PRACE organizations and the motivation for a collaborative effort between them. We then describe the three projects and what was in scope for this collaboration, outline the hurdles and issues the projects faced during this unique collaborative effort, and discuss the benefits they derived from it. Finally, we outline the steps envisioned for future XSEDE-PRACE collaborations.


Wednesday July 29, 2015 3:30pm - 4:00pm CDT
Majestic F

4:00pm CDT

Distributed Memory Parallel Multigrid Solver for Growth and Self-Assembly of Crystalline Nanostructures
Reproducibly synthesizing and assembling arrays of nanostructures with controlled morphologies and compositions is very important in the study of nanostructured materials, which are the basis of many breakthrough technologies. PFC-based models, which allow larger domains to be simulated than the traditional approach, are widely used in this field. However, governing equations involving gradients, nonlinearity, and convolution pose a challenge to current state-of-the-art numerical and computational methods. To address this problem, we developed unconditionally stable numerical algorithms and a highly efficient Fortran solver that combines a full approximation scheme (FAS) geometric multigrid module with a discrete Fourier transform (DFT) module built on FFTW. The former handles the nonlinear problem, and the latter computes the convolution. In two-dimensional simulations this solver proved very successful. In the extension to three dimensions, however, the number of degrees of freedom of interest reaches a trillion and beyond, so high-performance computing techniques must be employed to accelerate the solver. We applied distributed-memory parallelization to both modules through the Message Passing Interface (MPI). The multigrid module is parallelized by domain decomposition in all three dimensions (cubes) with halo regions. The DFT module takes advantage of the MPI interface of the FFTW library, which also uses domain decomposition, but only in one dimension (slabs). The new solver was installed on Stampede and tested extensively: we studied the strong and weak scalability of the multigrid and FFTW modules individually and of the full solver. To the best of our knowledge, this is the first such result in this field. Our results indicate that both the multigrid and FFTW modules scale linearly, indicating strong potential for the solver. On the other hand, because the two modules decompose the domain along different dimensions, the scalability of the full solver is slightly less than optimal. Furthermore, this difference introduces an uneven memory distribution between worker processes, which needs to be treated properly.
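A sketch of what the DFT module contributes, under illustrative assumptions (a Gaussian correlation kernel and a random density field): the convolution term in the governing equation is evaluated by forward FFT, pointwise multiplication with the kernel's transform, and inverse FFT, reducing the cost on an n-cubed grid from O(n^6) for direct convolution to O(n^3 log n).

```python
# Convolution term evaluated in Fourier space (serial analog of the
# solver's DFT/FFTW module).
import numpy as np

n = 128                     # grid points per dimension (3-D domain)
L = 2 * np.pi               # periodic box size
density = np.random.rand(n, n, n)

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
kernel_hat = np.exp(-0.5 * k2)  # placeholder correlation kernel

# forward FFT -> multiply by kernel transform -> inverse FFT
conv = np.fft.ifftn(np.fft.fftn(density) * kernel_hat).real
print(conv.shape)
```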


Wednesday July 29, 2015 4:00pm - 4:30pm CDT
Majestic C

4:00pm CDT

TAS View of XSEDE Users and Usage
The Technology Audit Service has developed XDMoD, a resource management tool. This paper uses XDMoD and the XDMoD data warehouse it draws from to provide a broad overview of XSEDE users and their usage. Some important trends include: 1) in spite of large yearly turnover, a core of users persists over many years; 2) job submission has shifted from primarily faculty members to students and postdocs; 3) growth in usage in Molecular Biosciences and Materials Research has outstripped that of other fields of science; 4) the distribution of user external funding is bimodal, with one group having a large ratio of external funding to internal XSEDE funding (i.e., CPU cycles) and a second group having a small ratio; 5) user job efficiency is also bimodal, with a group of presumably new users running mainly small, inefficient jobs and another group running larger, more efficient jobs; and 6) based on an analysis of citations of published papers, the scientific impact of XSEDE and its service providers is demonstrated in the statistically significant advantage it provides to the research of its users.


Wednesday July 29, 2015 4:00pm - 4:30pm CDT
Majestic A

4:00pm CDT

Industry Challenge: Enabling HPC Simulation Workflows for Complex Industrial Flow Problems
Industry's use of simulation-based engineering that takes advantage of massively parallel computing methods is limited by the costs associated with developing and using high performance computing software and systems. To enable industry to include large-scale parallel simulations effectively in daily production use, two key areas must be addressed. The first is access to large-scale parallel computing systems that are cost effective to use. The second is support for complete simulation workflow execution on these systems by industrial users. This paper presents an approach, and a set of associated software components, that can support industrial users on large-scale parallel computing systems available at national laboratories, universities, or on clouds.


Wednesday July 29, 2015 4:00pm - 4:30pm CDT
Majestic B

4:00pm CDT

Optimizing Codes on the Xeon Phi: A Case-study with LAMMPS
Intel’s Xeon Phi co-processor has the potential to provide an impressive 4 GFlops/Watt while promising users that they need only recompile their code to run it on the accelerator. This paper reports our experience running LAMMPS, a widely used molecular dynamics code, on the Xeon Phi and the steps we took to optimize its performance on the device. Using performance analysis tools to pinpoint bottlenecks in the code, we achieved a speedup of 2.8x for the optimized code on the Xeon Phi relative to the original code on the host processors. These optimizations also improved LAMMPS performance on the host, speeding up execution by 7x.


Wednesday July 29, 2015 4:00pm - 4:30pm CDT
Majestic F

4:30pm CDT

A SIMD tabu search implementation for solving the quadratic assignment problem with GPU acceleration
In the Quadratic Assignment Problem (QAP), n units (usually departments, machines, or electronic components) must be assigned to n locations given the distance between the locations and the flow between the units. The goal is to find the assignment that minimizes the sum of the products of distance traveled and flow between units. The QAP is a combinatorial problem that is difficult to solve to optimality even when n is relatively small (e.g., n = 30). In this paper, we develop a parallel tabu search algorithm to solve the QAP that leverages the compute capabilities of current GPUs.
 
The algorithm is implemented on the Stampede cluster hosted by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. We enhance our implementation by exploiting the dynamic parallelism available in the Nvidia Kepler high performance computing architecture. In a series of experiments on the well-known QAPLIB data sets, our algorithm produces solutions that are as good as the best known ones posted in the QAPLIB, with a worst-case gap of 0.83%. Given the broad applicability of the QAP, the algorithm we propose has strong potential to accelerate scholarly research in engineering, particularly in operations research and the design of electronic devices. To the best of our knowledge, this work is the first to successfully parallelize the tabu search metaheuristic with the recency-based feature implemented serially by Chiang & Kouvelis (1996). Our work is also the first to exploit GPU dynamic parallelism in a tabu search metaheuristic to solve the QAP.
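For readers unfamiliar with the QAP, a small sketch of the objective and one tabu-search iteration follows. The paper's contribution is evaluating the O(n^2) candidate swaps in parallel on the GPU; the serial loop below stands in for that. Instance data, tabu tenure, and iteration count are all illustrative.

```python
# QAP objective and a recency-based tabu-search step (serial sketch).
import itertools
import numpy as np

n = 8
rng = np.random.default_rng(0)
F = rng.integers(0, 10, (n, n))   # flow between units
D = rng.integers(0, 10, (n, n))   # distance between locations

def cost(p):
    # sum over i, j of F[i, j] * D[p[i], p[j]]
    return int((F * D[np.ix_(p, p)]).sum())

p = rng.permutation(n)
tabu = {}                          # (i, j) -> iteration until which swap is tabu
best_p, best_cost = p.copy(), cost(p)
for it in range(100):
    candidates = []
    for i, j in itertools.combinations(range(n), 2):
        q = p.copy()
        q[i], q[j] = q[j], q[i]
        c = cost(q)
        # Aspiration: a tabu swap is allowed if it beats the best so far.
        if tabu.get((i, j), -1) < it or c < best_cost:
            candidates.append((c, i, j, q))
    c, i, j, p = min(candidates)   # best admissible swap this iteration
    tabu[(i, j)] = it + 7          # recency-based tenure (illustrative)
    if c < best_cost:
        best_p, best_cost = p.copy(), c
print(best_cost)
```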


Wednesday July 29, 2015 4:30pm - 5:00pm CDT
Majestic C

4:30pm CDT

Of REFs, Regions, and Research: A One-Year Review of our National Cyberinfrastructure Continuum
Emerging technologies such as high resolution electron microscopy, ever more advanced telescopes, and next generation sequencing, coupled with a ‘big data’ frenzy, are driving the need for increased computing capability in the sciences. Today, researchers talk about an array of advanced digital resources and services comprising both technologies and people: local and remote computing systems, instruments, datasets, and visualization/collaboration environments, as well as IT support staff, trainers and educators, and students, the future leaders and innovators. The National Science Foundation (NSF) is funding a number of initiatives to understand the most effective strategies for helping individuals on campus make the most informed choices from the vast array of resources and services available to them, such as XSEDE, OSG, ACI-REF, Blue Waters, NCAR, and others. These projects are capturing lessons learned and strategies for providing access to the most appropriate local, regional, and national resources and services.

 

Transitioning from departmental or individual computing resources to Advanced CyberInfrastructure (ACI) such as campus, regional, or national resources is a non-trivial step for many to take. Research facilitation practices, such as those of the NSF-funded ACI-REF (Advanced Cyberinfrastructure – Research and Education Facilitator) program and our community building via programs like the XSEDE Campus Champions, are poised to have a significant impact on this critical shift, infusing campus-level “representatives” with the knowledge to guide their researchers in sound service and workflow decisions. As campus computing has grown, along with the requirement for a new, knowledgeable workforce both locally and nationally, the NSF projects have formed a national cadre of people able to assist each other in supporting their local communities, developing this ‘on ramp’ and supplementing small local support staffs.

 

For example, at a recent XSEDE Region 7 meeting at the MGHPCC in Holyoke, MA, more than 80% of the Campus Champions in the region met to develop their mission as a region, including the importance and problems of training, unified software infrastructure, and the value of being colleagues. On the positive side, in one year the 15 ACI-REFs have held 74 training sessions on ACI resources with 1,086 attendees and conducted 809 consults, and the 253 XSEDE Campus Champions have held 817 outreach/training events, reaching over 8,027 (potential) users. But there is much to do: in a survey of users six months after a new cluster deployment at Harvard, 59% did not feel comfortable using clusters, 51% had little to no Unix experience, 65% did not have a “technical support” person in the lab, and 57% were from non-traditional ACI sciences (biology and the social sciences). Preliminary evidence suggests that training, the increased prevalence of documentation, and local, knowledgeable facilitators have resulted in more sophisticated help requests, increased usage of the full range of digital resources, enhanced curricula incorporating these new methods, and quicker “time to science” for many research groups. With the extensive catalog of XSEDE trainings and the resources of organizations like Software/Data Carpentry and third-party vendors, ACI-REFs and Champions can be agile in supporting their campuses’ specific needs. In total, these interactions and training experiences provide a direct, key advantage for researchers when applying for grants and research positions.

 

We will discuss the educational landscape and cyberinfrastructure continuum and the role of the ACI-REF, our often combined roles as XSEDE Campus Champions and Instructors, and statistics derived from our efforts, including specific examples of our programs and the impact on research at the participating campuses.

 

Intended Audience: Research Directors and Leadership, CIOs, Service Providers, ACI-REFs, Champions, Scientists, Educators, and Researchers

 

Key Message: “Cyberinfrastructure is a continuum of people, resources, technologies, and services. We describe how the community works together across local, regional, national, and even international levels to assist the community in making the most effective use of cyberinfrastructure to advance discovery in all fields of study.”

Speakers

Scott Lathrop

Blue Waters Technical Program Manager Education, Outreach and Training, Shodor


Wednesday July 29, 2015 4:30pm - 5:00pm CDT
Majestic A

4:30pm CDT

A Prototype Sampling Interface for PAPI
PAPI is a widely used portable library for accessing hardware counters on modern microprocessors. PAPI offers both counting and sampling interfaces, but the sampling interface is extremely limited, consisting of a simple interrupt-driven mechanism that periodically reports processor state. In the past few years, the hardware and operating systems of modern processors have added support for new, more advanced sampling features that make it possible to obtain information about non-uniform memory access (NUMA) behavior. Currently, performance tool developers who want to provide sampling data to their users must use a complex low-level kernel interface, sometimes developing their own kernel patch to access the features they need. This paper reports on initial efforts to develop a middleware layer that will serve as a stable interface, enabling tool developers to access sampling data through standard PAPI calls and obtain data important for NUMA analysis.


Wednesday July 29, 2015 4:30pm - 5:00pm CDT
Majestic F

5:00pm CDT

Poster Session
Wednesday July 29, 2015 5:00pm - 7:00pm CDT
Majestic Foyer

5:00pm CDT

Visualization Showcase
Wednesday July 29, 2015 5:00pm - 7:00pm CDT
Majestic Foyer

6:30pm CDT

Campus Champion Reception and Dinner
By invitation only

Wednesday July 29, 2015 6:30pm - 9:00pm CDT
Crystal Ballroom

7:00pm CDT

Student Speed Networking
Graduate students will prepare a five-minute presentation of their research and may use their laptops if necessary. We will place 1-2 graduate students and 5-8 undergraduate students per table for 15-20 minutes: the five-minute presentations come first, and the rest of the time is for questions and answers from the undergraduate students seated at the table. Questions may be research-related or professional, such as how you became interested in pursuing graduate school and what the major challenges have been. When the 15-20 minutes are up, the graduate student(s) will rotate tables. We expect at least three rotations.

Wednesday July 29, 2015 7:00pm - 8:00pm CDT
Statler Ballroom
 
Thursday, July 30
 

7:00am CDT

Breakfast sponsored by DDN
Thursday July 30, 2015 7:00am - 8:00am CDT
Landmark 1-7

7:00am CDT

Registration
Registration

Thursday July 30, 2015 7:00am - 12:00pm CDT
Landmark Registration Office

8:30am CDT

Plenary: Building Effective Teams in Extreme Science and Engineering Environments
Our ability to solve complex problems often requires the integration of knowledge from multiple disciplines and the establishment of teams that involve individuals from the private and public sectors. In support of these environments, XSEDE provides critical services that support the use of high-performance computing, the storage and access of data, and the ability to operate as virtual organizations. To complement these services and enhance the effectiveness of novice researchers and others in such settings, professional development in the areas of knowledge integration, team building, and inclusion of perspectives is imperative. A recent National Research Council report noted that “team science can lead to results with greater scientific impact, innovation, productivity, and reach than single-investigator approaches.” This talk introduces the NSF CREST-funded CyberShARE Center of Excellence and its efforts in building diverse, interdisciplinary teams. In particular, the talk will describe the basis and practice of teamwork, present models of interdisciplinary teamwork, and discuss the community’s role in supporting diversity.

Speakers

Thursday July 30, 2015 8:30am - 9:30am CDT
Majestic D&E

9:30am CDT

Break
Thursday July 30, 2015 9:30am - 10:00am CDT
Landmark 1-7

10:00am CDT

Discovering the Influence of Socioeconomic Factors On Online Game Behaviors
With the rapid growth in popularity of online games in recent years, research has increased our knowledge of game playing behaviors and some demographic correlates, such as age and gender. Contributing to this line of research, we further examined the associations of socioeconomic factors (education, employment, income, and poverty status) and demographic characteristics (age, gender, and race/ethnicity) with online gaming behaviors (player count, total game sessions, total actions taken, and maximum level reached). We focused on one form of online games, Massively Multiplayer Online Games (MMOGs), which have received much attention recently. Analyses were conducted by extracting and linking data from the game logs of EverQuest II, a popular MMOG, and summarized U.S. Census data for the zip-code areas of players from the state of Illinois (n=495). This study was performed on Gordon, a supercomputer hosted at the San Diego Supercomputer Center.
 
Results from multiple regression analyses indicated positive associations of zip-code-level education rate (Bachelor’s degree or above) and employment rate (for ages 16 or older) with game playing behaviors. The impact of median income on gaming behaviors appeared to operate through education and employment. Although male players outnumbered female players, female players tended to play more game sessions and take more actions. Results also indicated that a higher percentage of white population in a geographic area was negatively related to game playing activities. These findings provide valuable information and highlight the importance of further research and theoretical development in this area.
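A sketch of the regression setup described above, with synthetic data standing in for the EverQuest II / Census linkage: variable names and effect sizes are invented, and statsmodels is assumed available.

```python
# Multiple regression of a gaming behavior on zip-code-level predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 495  # zip-code areas, as in the study
education = rng.uniform(0.1, 0.6, n)      # % Bachelor's or above
employment = rng.uniform(0.4, 0.8, n)     # % employed, ages 16+
median_income = rng.uniform(30e3, 90e3, n)
sessions = (200 * education + 150 * employment
            + rng.normal(0, 20, n))       # synthetic outcome

X = sm.add_constant(np.column_stack([education, employment, median_income]))
model = sm.OLS(sessions, X).fit()
print(model.summary())
```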


Thursday July 30, 2015 10:00am - 10:30am CDT
Majestic C

10:00am CDT

Connecting the Non-Traditional User-Community to the National CyberInfrastructure
This paper reports on a hands-on workshop organized to promote the use of the National CyberInfrastructure (NCI) among non-traditional High Performance Computing (HPC) users. The majority of the workshop participants were students and professionals who had never used the NCI before but were interested in leveraging it for computationally intensive Big Data management activities. With support from the National Science Foundation (NSF), students from underrepresented groups were also funded to participate in the workshop, where they learned about both the NCI and Big Data management. The workshop itself was an outcome of an XSEDE Extended Collaborative Support Service (ECSS) project that involved non-traditional HPC users from the archaeology domain.


Thursday July 30, 2015 10:00am - 10:30am CDT
Majestic A

10:00am CDT

GeoHashViz: Interactive Analytics for Mapping Spatiotemporal Diffusion of Twitter Hashtags
Twitter serves as a multi-purpose social medium that attracts hundreds of millions of users to share their daily activity logs. With the ever-increasing use of GPS-enabled devices, these activities can be linked to the approximate location of the user, providing a rich, nearly real-time geospatial dataset. Recent studies have suggested that Twitter hashtags, given their popularity and concise structure, can be used as a medium to study idea diffusion. However, to our knowledge, there is no interactive framework that provides in-depth analytics on the spatiotemporal diffusion of hashtags. Our framework, GeoHashViz, provides a scalable approach to extract metrics related to the geospatial diffusion of hashtags and visualize them in a highly interactive web-based application. We take advantage of Hadoop to calculate aggregated metrics for both hashtags and locations over incoming tweets. To increase its usability, GeoHashViz has been integrated into the CyberGIS Gateway, a leading online cyberGIS environment that enables a large number of users to perform computing- and data-intensive, collaborative geospatial problem-solving powered by advanced cyberinfrastructure.
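The core aggregation here is a map/reduce-style count of (hashtag, spatial cell) pairs. The sketch below uses a coarse lat/lon grid as a stand-in for true geohashing and in-memory counting as a stand-in for the Hadoop job; the sample tweets are invented.

```python
# Count (hashtag, spatial cell) pairs: the shape of the Hadoop job.
from collections import Counter

def cell(lat, lon, precision=1.0):
    """Bucket a coordinate onto a coarse grid (geohash stand-in)."""
    return (round(lat / precision) * precision,
            round(lon / precision) * precision)

tweets = [  # (lat, lon, hashtags) -- invented sample data
    (40.1, -88.2, ["#hpc", "#xsede15"]),
    (40.6, -88.0, ["#hpc"]),
    (29.7, -95.4, ["#xsede15"]),
]

counts = Counter()
for lat, lon, hashtags in tweets:            # "map" phase: emit pairs
    for tag in hashtags:
        counts[(tag, cell(lat, lon))] += 1   # "reduce" phase: sum counts

for (tag, cell_key), count in counts.most_common():
    print(tag, cell_key, count)
```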


Thursday July 30, 2015 10:00am - 10:30am CDT
Majestic B

10:00am CDT

Market-Based On Demand Scheduling (MBoDS) in a Shared Virtual Compute Facility
Computational and data scientists at universities have differing job resource requirements. Most universities maintain a set of shared resources to support these computational needs. Access to these resources is often free, and the access policy is First Come First Serve (FCFS). However, FCFS policies on shared resources often yield sub-optimal value from the organization’s point of view, since different jobs contribute different value to their users. Furthermore, the set of resources at a single institution may fail to satisfy the diverse needs of the institution’s researchers. We argue the solution is twofold: differentiated quality of service (QoS) based on the user’s willingness to pay, to rationalize resource usage, and federation of university resources, to improve both the size of the resource pool and the diversity of resources. The proposed XSEDE Campus Bridging Shared Virtual Compute Facility (CB-SVCF) provides both. The CB-SVCF will be deployed using existing XSEDE Execution Management Services (EMS) and the XSEDE Global Federated File System (GFFS).
 
Before deploying the CB-SVCF it is critical to understand and demonstrate the expected benefits to stakeholders under different load, pricing, and priority scenarios. We have developed a simulator to understand these trade-offs. In this paper we present simulation results with two qualities of service, using a month of job trace data from two universities. Our results show that MBoDS outperforms both FCFS and a simple priority scheduling policy for a single site, both in overall value delivered and in the number of high-priority jobs started within the predefined threshold.
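A toy version of the comparison such a simulator makes, under strong simplifying assumptions (one resource, no preemption, a job's value standing in for willingness to pay): drain the same job stream in FCFS order and in highest-value-first order, then compare value-weighted wait times. Job mix and values are illustrative.

```python
# FCFS vs. market-based (highest willingness to pay first) on one resource.
import heapq
from collections import namedtuple

Job = namedtuple("Job", "submit_time runtime value")
jobs = [Job(0, 5, 10), Job(1, 3, 50), Job(2, 8, 5), Job(3, 2, 80)]

def total_weighted_wait(order):
    """Sum of value-weighted wait times for a given execution order."""
    clock, weighted = 0, 0
    for job in order:
        clock = max(clock, job.submit_time)
        weighted += job.value * (clock - job.submit_time)
        clock += job.runtime
    return weighted

fcfs = sorted(jobs, key=lambda j: j.submit_time)

# Market-based: among already-submitted jobs, run the highest-value first.
heap, market, clock = [], [], 0
pending = sorted(jobs, key=lambda j: j.submit_time)
while pending or heap:
    while pending and pending[0].submit_time <= clock:
        job = pending.pop(0)
        heapq.heappush(heap, (-job.value, job))
    if heap:
        _, job = heapq.heappop(heap)
        market.append(job)
        clock = max(clock, job.submit_time) + job.runtime
    else:
        clock = pending[0].submit_time  # idle until the next submission

print("FCFS weighted wait:  ", total_weighted_wait(fcfs))
print("Market weighted wait:", total_weighted_wait(market))
```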


Thursday July 30, 2015 10:00am - 10:30am CDT
Majestic F

10:30am CDT

The VAT: Enhancing Video Analysis
The practice of extracting knowledge from large volumes of video data suffers from a problem of variety. Security, military, and commercial identification and retrieval are well-traveled paths for identifying very particular objects or people in recorded footage, yet there are extremely few established technology solutions and use cases for understanding what large-scale video collections can help us discover about contemporary culture and history. This dearth is not due to a lack of imagination on the part of researchers; rather, we contend that to grow a common set of instruments, measures, and procedural methods, a common gateway into content and analytics is needed for cultural and historical experts. The Video Analysis Tableau (VAT), formerly the LSVA, is a research project aimed at establishing a software workbench for video analysis, annotation, and visualization, using both current and experimental discovery methods and built on the Medici framework/interface. The VAT employs a host of algorithms for machine reading, in addition to spaces for user-generated tagging and annotation; it is currently being expanded into a gateway project in order to foster a strong community of practice that includes researchers in a variety of disciplines.


Thursday July 30, 2015 10:30am - 11:00am CDT
Majestic C

10:30am CDT

On Fostering a Culture of Research Cyberinfrastructure Grant Proposals within a Community of Service Providers in an EPSCoR State
Prior to the early 2000s, Oklahoma institutions had a very limited history of submitting research Cyberinfrastructure (CI) grant proposals. Over the past decade this trend has dramatically reversed: beginning in 2003, Oklahoma institutions have submitted 15 proposals for research CI, including eight National Science Foundation (NSF) Major Research Instrumentation proposals, an NSF Experimental Program to Stimulate Competitive Research (EPSCoR) Research Infrastructure Improvement (RII) Track-2 proposal with a strong CI focus (jointly with Kansas), an NSF EPSCoR RII Cyber Connectivity (C2) proposal, and five NSF Campus Cyberinfrastructure proposals (one in 2013, two in 2014, and two in 2015). Of these 15 proposals, ten have been awarded, two have been declined, and three are pending, a success rate that far outstrips the NSF-wide Fiscal Year 2014 average of 20%. The evolution of Oklahoma’s approach, and the growth of the community of CI service providers in the state, are the focus of this paper.


Thursday July 30, 2015 10:30am - 11:00am CDT
Majestic A

10:30am CDT

Computational Discovery Desktop
Computational Discovery Desktop (CDD) is an application-independent, open framework that aids researchers in their daily interaction with distributed HPC resources. It provides computational project management at the personal, research-group, and inter-institutional collaboration levels. The framework seamlessly integrates typical operations, including multi-user access to project data, application deployment, input preparation, job scheduling, state gathering, and data archival, into the daily work of the researcher by offering a common interface to frequently performed tasks. The implemented scheduling options support extensive logic for job dependencies and for submitting or cancelling a job at a specific time or under a specific condition. The job state monitor tracks job progress and can terminate a job if a predefined condition is met, e.g., the job is running too slowly due to system issues. The interface can easily be expanded to include additional operations. Users are free to choose the desired level of integration of their day-to-day operations into the CDD framework, based on their individual habits and experience with HPC resources. CDD relies on a relational MySQL database backend. It uses the cron utility to perform scheduled off-line tasks and to create periodic reports about job and project states. CDD uses XML to manage input files and to store post-processed metadata. It comes with an extensive set of command-line tools for interaction with the user, written in Perl to simplify modification and avoid compilation. CDD employs the Globus Toolkit for certificate-based authentication and data transfer. Currently configured applications include NAMD, Amber, GROMACS, LAMMPS, and NWChem. The framework has been tested on NCSA Blue Waters and TACC Ranger.


Thursday July 30, 2015 10:30am - 11:00am CDT
Majestic B

10:30am CDT

Integrating Apache Spark Into PBS-Based HPC Environments
This paper describes an effort at the University of Tennessee’s National Institute for Computational Sciences (NICS) to integrate Apache Spark into the widely used TORQUE HPC batch environment. The similarities and differences between the execution of a Spark program and that of an MPI program on a cluster are used to motivate how to implement Spark/TORQUE integration. An implementation of this integration, pbs-spark-submit, is described, including demonstrations of functionality on two HPC clusters and a large shared-memory system.
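The key idea can be illustrated with a short sketch: a TORQUE job's allocated nodes are listed in $PBS_NODEFILE, so a Spark master/worker layout can be derived from that file when the batch job starts. This is a simplified illustration of the concept, not pbs-spark-submit's actual implementation; the commented-out launch commands are hypothetical wiring and assume a SPARK_HOME environment variable.

```python
# Derive a Spark cluster layout from a TORQUE allocation.
import os

nodefile = os.environ["PBS_NODEFILE"]
with open(nodefile) as f:
    # TORQUE lists one line per allocated core; dedupe to unique hosts
    # while preserving order.
    hosts = list(dict.fromkeys(line.strip() for line in f if line.strip()))

master, workers = hosts[0], hosts[1:] or hosts
master_url = f"spark://{master}:7077"
print("master:", master_url, "workers:", workers)

# Hypothetical wiring: start the master on this node, start each worker
# over ssh, then point spark-submit at the master URL.
# subprocess.run([f"{os.environ['SPARK_HOME']}/sbin/start-master.sh"])
# for host in workers:
#     subprocess.run(["ssh", host,
#                     f"{os.environ['SPARK_HOME']}/sbin/start-slave.sh",
#                     master_url])
```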


Thursday July 30, 2015 10:30am - 11:00am CDT
Majestic F

11:00am CDT

Grouping Game Players Using Parallelized K-Means On Supercomputers
Grouping game players based on their online behaviors has attracted much attention recently. However, due to the huge volume and extreme complexity of online game data collections, grouping players is a challenging task. This study applied parallelized K-Means on Gordon, a supercomputer hosted at the San Diego Supercomputer Center, to meet the computational challenge of this task. Using the parallelization functions supported by R, the study clustered 120,000 game players into eight non-overlapping groups and sped up the clustering process by one to four times under two- to eight-way parallelism. The study systematically examined a number of factors that may affect the quality of the clusters and/or the performance of the clustering process, including degree of parallelism, number of clusters, data dimensions, and variable combinations. Besides demonstrating the effectiveness of parallelized K-Means in grouping game players, this study also highlights lessons learned in using K-Means on very large datasets and experience in applying parallel processing techniques to intensive data analysis.
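As a rough analog of the approach (the study itself used R's parallelization functions), the sketch below parallelizes the assignment step of Lloyd's K-Means across data chunks with Python's multiprocessing; the data are synthetic and toy-sized.

```python
# K-Means with the assignment step parallelized over data chunks.
import numpy as np
from multiprocessing import Pool

def nearest_centroid(args):
    chunk, centroids = args
    # Squared distance from each point to each centroid; pick the closest.
    d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(X, k=8, workers=4, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    with Pool(workers) as pool:
        for _ in range(iters):
            chunks = np.array_split(X, workers)
            labels = np.concatenate(
                pool.map(nearest_centroid, [(c, centroids) for c in chunks]))
            # Update step; keep the old centroid if a cluster empties out.
            centroids = np.array([X[labels == j].mean(axis=0)
                                  if (labels == j).any() else centroids[j]
                                  for j in range(k)])
    return labels, centroids

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(120_000, 4))
    labels, centroids = parallel_kmeans(X)
    print(np.bincount(labels))
```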


Thursday July 30, 2015 11:00am - 11:30am CDT
Majestic C

11:00am CDT

Science Gateways For Humanities, Arts, and Social Science
In this paper, I describe some of my experiences as XSEDE’s Humanities Specialist and how they have led to an effort within XSEDE to create a collection of Science Gateways tailored to the needs of various communities within the Humanities, Arts, and Social Sciences. This paper provides an update on that effort.

Speakers

Thursday July 30, 2015 11:00am - 11:30am CDT
Majestic A

11:00am CDT

Leveraging DiaGrid Hub for Interactively Generating and Running Parallel Programs
Interactive Parallelization Tool (IPT) is a semi-automatic tool that domain experts and students can use to transform certain classes of existing applications into multiple parallel variants. An end-user of IPT provides an existing application and high-level specifications for parallelization as input. On the basis of these specifications, IPT carries out the code changes needed to generate parallel variants that can run on different High Performance Computing (HPC) platforms. The parallel programming paradigms currently supported by IPT are MPI, OpenMP, and CUDA; the supported base languages are C and C++. Though IPT is still under active development, it has recently been made available on a web-enabled platform, named DiaGrid Hub, with support from the XSEDE Extended Collaborative Support Service (ECSS). While the main goal of IPT is to make parallel programming easy for its end-users, the main goal of DiaGrid Hub is to give the research community instant access to HPC and High Throughput Computing platforms through a user-friendly web interface. By deploying IPT on DiaGrid Hub, we enable end-users to generate parallel versions of their existing applications without having to install IPT locally. They can also immediately compile and run the generated applications on the Purdue and XSEDE resources available through DiaGrid Hub. Hence, the collaborative project reported in this paper lowers the entry barriers to parallel programming and to the use of the National CyberInfrastructure (NCI). In this paper, we present our ongoing work on deploying IPT over DiaGrid and testing its usability through a web interface.


Thursday July 30, 2015 11:00am - 11:30am CDT
Majestic B

11:00am CDT

Advanced User Environment Design and Implementation on Integrated Multi-architecture Supercomputers
Scientists and engineers using supercomputer clusters should be able to focus on their scientific and technical work instead of worrying about operating their user environment. However, creating a convenient and effective user environment on modern supercomputers becomes more and more challenging due to the complexity of these large-scale systems. In this report, we discuss important design issues and goals for a user environment that must support multiple compiler suites, various applications, and diverse libraries on heterogeneous computing architectures. We present our implementation on Yellowstone, a powerful high-performance computing system dedicated to earth system science and deployed by the National Center for Atmospheric Research. Our newly designed user environment is built upon a hierarchical module structure, customized wrapper scripts, pre-defined system modules, an Lmod-based module implementation, and several creative tools. The resulting implementation offers many useful features, including streamlined control, versioning, user customization, and automated documentation, and accommodates both novice and experienced users. The design and implementation also minimize the effort the administrators and support team spend managing user environments. The smooth rollout and positive feedback from our users demonstrate that the design and implementation on the Yellowstone system have been well received and have served thousands of users all over the world.


Thursday July 30, 2015 11:00am - 11:30am CDT
Majestic F

12:00pm CDT

Lunch and Awards Ceremony
Thursday July 30, 2015 12:00pm - 2:00pm CDT
Majestic D&E
 