Rajesh Nishtala and Katherine A. Yelick (2009)

Optimizing Collective Communication on Multicores

In: Hot Topics in Parallelism (HotPar '09).

As the gap in performance between processors and memory systems continues to grow, the communication component of an application will increasingly dictate overall performance and scalability. It is therefore useful to abstract common communication operations across cores as collective communication operations and to tune them through a runtime library that can employ sophisticated automatic tuning techniques. The focus of this paper is collective communication in Partitioned Global Address Space (PGAS) languages, which are a natural extension of the shared-memory hardware of modern multicore systems. In particular, we highlight how automatic tuning can lead to significant performance improvements, and we show how loosening the synchronization semantics of a collective can lead to more efficient use of the memory system. We demonstrate that loosely synchronized collectives realize consistent speedups over their strictly synchronized counterparts on the highly threaded Sun Niagara2 for message sizes ranging from 8 bytes to 64 kB. We thus argue that the synchronization requirements of a collective must be exposed in its interface so that the collective and the synchronization can be optimized together.
March 30–31, 2009, Berkeley, CA
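
The Unified Parallel C (UPC) collectives library is one PGAS interface that already exposes a collective's synchronization requirements to the runtime, via upc_flag_t flags passed on every call. The sketch below (an illustration based on the UPC collectives specification, not code from the paper) requests a loosely synchronized broadcast by relaxing both the entry and the exit synchronization:

    #include <upc.h>
    #include <upc_collective.h>

    #define N 4

    shared [N] int dst[N * THREADS];  /* each thread receives N ints     */
    shared []  int src[N];            /* source block, affinity thread 0 */

    int main(void) {
        if (MYTHREAD == 0)
            for (int i = 0; i < N; i++)
                src[i] = i;
        upc_barrier;  /* with UPC_IN_NOSYNC the caller must ensure the
                         source data is ready before any thread enters */

        /* UPC_IN_NOSYNC:  the collective performs no synchronization on
           entry, so threads may begin the data movement as they arrive.
           UPC_OUT_MYSYNC: a thread may read its own block of dst as soon
           as the call returns locally, without waiting for all threads. */
        upc_all_broadcast(dst, src, N * sizeof(int),
                          UPC_IN_NOSYNC | UPC_OUT_MYSYNC);
        return 0;
    }

Because the synchronization mode is an argument of the collective itself, the library is free to pick an implementation (and an amount of internal synchronization) matched to the requested semantics; a strictly synchronized caller would instead pass UPC_IN_ALLSYNC | UPC_OUT_ALLSYNC and pay for barrier-like behavior on both entry and exit.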