Workshop on Petascale Architectures and Performance Strategies

Held July 23-26, 2007, Snowbird Ski and Summer Resort, Snowbird, Utah, USA

Workshop Agenda :: Workshop Slides

Organizers

  • Rusty Lusk (Argonne National Lab), "lusk" AT "mcs.anl.gov"
  • Bill Gropp (Argonne National Lab), "gropp" AT "mcs.anl.gov"
  • Pete Beckman (Argonne National Lab), "beckman" AT "mcs.anl.gov"

Abstract


Goals:
  • Become familiar with the architecture, operation, and usability issues for each of the DOE Leadership Class Facilities.
  • Understand application scaling bottlenecks on the systems.
  • Learn strategies for achieving good performance with message passing and I/O libraries.
  • Explore new programming models, languages, and techniques that can provide scalable performance.
  • Learn the tools and suggested strategies for understanding the performance of petascale applications.

Agenda

Day 1 - Monday, July 23

  • Morning: Application Forum.  We started with a quick summary of each attendee's code: the issues they face, what they are seeing, and what problems they have.  We had encouraged attendees to send these in prior to the meeting, but we discussed them in the morning.  We worked to extract "challenge problems" -- understanding, from the user perspective, their largest challenges.  What tools do they use now?  What have they tried but given up on?
  • Afternoon: The Leadership Class Platforms.  An overview of the facilities: current status, expansion plans, problems, roadmap, and operations.  This included the current "expected performance" that applications are seeing.

Day 2 - Tuesday, July 24

  • Morning: Programming for optimal performance with MPI on the machines.  What works and what does not, which collectives perform well and when they do not, latencies, bandwidths, etc., with a mention of OpenMP.  (A sketch of leaning on a tuned collective appears after this list.)
  • Afternoon: Parallel I/O libraries and strategies.  How to avoid the one-file-per-process pattern using MPI-IO, Parallel netCDF (pNetCDF), and HDF5.  (See the MPI-IO sketch below.)
  • Late night hacking.  Everyone explored and chatted about their codes.
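
Below is a minimal C sketch of the kind of point made in the morning session: let the library's tuned collective (here MPI_Allreduce) do the work, rather than hand-rolling the communication with point-to-point sends.  The buffer contents and the timing printout are illustrative, not from the workshop materials.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        double local = 1.0, global = 0.0, t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* A single tuned collective replaces a loop of
           MPI_Send/MPI_Recv to rank 0 followed by a broadcast. */
        t0 = MPI_Wtime();
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("allreduce over %d ranks: sum = %g, %g sec\n",
                   nprocs, global, t1 - t0);
        MPI_Finalize();
        return 0;
    }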

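The corresponding I/O point, also as a hedged sketch: every rank writes its block of a single shared file with a collective MPI-IO call, instead of creating one file per process.  The file name "output.dat" and the block size are illustrative.

    #include <mpi.h>

    #define BLOCK 1024

    int main(int argc, char **argv)
    {
        int rank, i;
        double buf[BLOCK];
        MPI_File fh;
        MPI_Offset offset;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (i = 0; i < BLOCK; i++)
            buf[i] = (double)rank;   /* recognizable per-rank data */

        /* One shared file; each rank writes at its own offset, and
           the collective call lets the library aggregate requests. */
        offset = (MPI_Offset)rank * BLOCK * sizeof(double);
        MPI_File_open(MPI_COMM_WORLD, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_write_at_all(fh, offset, buf, BLOCK, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }
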
Day 3 - Wednesday, July 25

  • Morning: Tools for performance.  HPCToolkit, TAU, DynInst, etc., and how tools can help locate bottlenecks.  (A first-pass timing sketch follows this list.)
  • Afternoon: Hands-on session.  We worked on MPI, performance tools, MPI-IO, and whatever else attendees wanted.
  • Late night hacking.  We continued deconstructing performance issues with grad students and coffee.
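
As a complement to the tools session, here is a minimal C sketch of a first-pass measurement one might make before (or alongside) a profiler such as HPCToolkit or TAU: bracket a suspected hotspot with MPI_Wtime and look at the slowest rank.  compute_step() is a hypothetical stand-in for an application kernel.

    #include <mpi.h>
    #include <stdio.h>

    void compute_step(void) { /* application kernel goes here */ }

    int main(int argc, char **argv)
    {
        int rank;
        double t0, t1, local, tmax;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        t0 = MPI_Wtime();
        compute_step();
        t1 = MPI_Wtime();
        local = t1 - t0;

        /* Load imbalance shows up as a large gap between the
           slowest rank and the typical rank. */
        MPI_Reduce(&local, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("slowest rank took %g sec\n", tmax);

        MPI_Finalize();
        return 0;
    }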

Day 4 - Thursday, July 26

  • Morning: The future.  New languages: UPC (Unified Parallel C) and CAF (Co-Array Fortran).
  • Afternoon: Worked on the workshop summary with participants.  Specifically, workshop attendees helped:
    • Identify challenge application kernels that hardware architects can use in designing and evaluating next-generation systems, in the areas of node performance, internode communication, and I/O.
    • Identify strengths and weaknesses in available tool sets and support.

Sponsors

This workshop was sponsored by the Center for Scalable Application Development Software (CScADS), with funding from the Scientific Discovery through Advanced Computing (SciDAC) program.

