Using hybrid MPI and OpenMP programming to optimize communications in parallel loop self-scheduling schemes for multicore PC clusters

Chao-Chin Wu, Lien-Fu Lai, Chao Tung Yang, Po Hsun Chiu

Research output: Article › peer-review

19 Citations (Scopus)

Abstract

Recently, a series of parallel loop self-scheduling schemes have been proposed, especially for heterogeneous cluster systems. However, these schemes employ the MPI programming model to construct applications without considering whether the computing nodes have multicore architectures. As a result, every processor core has to communicate directly with the master node to request new tasks, even though processor cores on the same node could communicate with each other through the underlying shared memory. To address this higher communication overhead, in this paper we propose adopting a hybrid MPI and OpenMP programming model to design two-level parallel loop self-scheduling schemes. In the first level, each computing node runs an MPI process for inter-node communications. In the second level, each processor core runs an OpenMP thread to execute the iterations assigned to its resident node. Experimental results show that our method outperforms the previous works.
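The following is a minimal C sketch of the two-level structure the abstract describes: one MPI process per node self-schedules iteration chunks from a master process, and the cores of that node share each chunk through an OpenMP parallel loop. The fixed CHUNK size, the placeholder compute() kernel, and the message tags are assumptions for illustration; the paper's actual chunk-sizing policy and scheduling parameters are not reproduced here.

    /* Two-level self-scheduling sketch: MPI between nodes, OpenMP within a node. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define TOTAL_ITERS 100000
    #define CHUNK       1000      /* assumed fixed chunk; the paper tunes this */
    #define TAG_REQ     1
    #define TAG_WORK    2

    static double compute(int i) {        /* placeholder for one loop iteration */
        return (double)i * 0.5;
    }

    int main(int argc, char **argv) {
        int rank, size, provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                  /* master: hands out iteration chunks */
            int next = 0, finished = 0;
            while (finished < size - 1) {
                int dummy, range[2];
                MPI_Status st;
                MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_REQ,
                         MPI_COMM_WORLD, &st);
                if (next < TOTAL_ITERS) {
                    range[0] = next;
                    range[1] = (next + CHUNK < TOTAL_ITERS) ? next + CHUNK
                                                            : TOTAL_ITERS;
                    next = range[1];
                } else {
                    range[0] = range[1] = -1;     /* no more work: terminate */
                    finished++;
                }
                MPI_Send(range, 2, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
            }
        } else {                          /* one worker MPI process per node */
            double local_sum = 0.0;
            for (;;) {
                int dummy = 0, range[2];
                MPI_Send(&dummy, 1, MPI_INT, 0, TAG_REQ, MPI_COMM_WORLD);
                MPI_Recv(range, 2, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                if (range[0] < 0) break;
                /* second level: cores on this node share the chunk via OpenMP */
                #pragma omp parallel for schedule(dynamic) reduction(+:local_sum)
                for (int i = range[0]; i < range[1]; i++)
                    local_sum += compute(i);
            }
            printf("rank %d local sum = %f\n", rank, local_sum);
        }

        MPI_Finalize();
        return 0;
    }

Built with, for example, mpicc -fopenmp and launched with one MPI process per node, only the per-node worker processes talk to the master, while intra-node work sharing happens through shared memory, which is the communication saving the abstract refers to.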

Original language: English
Pages (from-to): 31-61
Number of pages: 31
Journal: Journal of Supercomputing
Volume: 60
Issue number: 1
DOIs
Publication status: Published - 2012 Apr 1

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Software
  • Information Systems
  • Hardware and Architecture
