TY - GEN
T1 - Optimizing dynamic programming on graphics processing units via data reuse and data prefetch with inter-block barrier synchronization
AU - Wu, Chao Chin
AU - Wei, Kai Cheng
AU - Lin, Ting Hong
PY - 2012/12/1
Y1 - 2012/12/1
N2 - Our previous study focused on accelerating an important category of DP problems, called nonserial polyadic dynamic programming (NPDP), on a graphics processing unit (GPU). In NPDP applications, the degree of parallelism varies significantly in different stages of computation, making it difficult to fully utilize the compute power of the hundreds of processing cores in a GPU. To address this challenge, we proposed a methodology that adaptively adjusts the thread-level parallelism when mapping an NPDP problem onto the GPU, thus providing sufficient and steady degrees of parallelism across different compute stages. This work aims to further improve the performance of NPDP problems. Subproblems and data are tiled so that small data regions fit into shared memory and the buffered data can be reused for each tile of subproblems, thus reducing the amount of global memory access. However, we found that invoking the same kernel many times, as required to enforce data consistency across different stages, makes it impossible to reuse the tiled data in shared memory after the kernel is invoked again. Fortunately, the inter-block synchronization technique allows us to invoke the kernel exactly once, with the restriction that the maximum number of blocks equals the total number of streaming multiprocessors. In addition to enabling data reuse, invoking the kernel only once also lets us prefetch data into shared memory across inter-block synchronization points, which improves performance more than data reuse does. We realize our approach in a real-world NPDP application: the optimal matrix parenthesization problem. Experimental results demonstrate that invoking a kernel only once cannot guarantee performance improvement unless we also reuse and prefetch data across barrier synchronization points.
AB - Our previous study focused on accelerating an important category of DP problems, called nonserial polyadic dynamic programming (NPDP), on a graphics processing unit (GPU). In NPDP applications, the degree of parallelism varies significantly in different stages of computation, making it difficult to fully utilize the compute power of the hundreds of processing cores in a GPU. To address this challenge, we proposed a methodology that adaptively adjusts the thread-level parallelism when mapping an NPDP problem onto the GPU, thus providing sufficient and steady degrees of parallelism across different compute stages. This work aims to further improve the performance of NPDP problems. Subproblems and data are tiled so that small data regions fit into shared memory and the buffered data can be reused for each tile of subproblems, thus reducing the amount of global memory access. However, we found that invoking the same kernel many times, as required to enforce data consistency across different stages, makes it impossible to reuse the tiled data in shared memory after the kernel is invoked again. Fortunately, the inter-block synchronization technique allows us to invoke the kernel exactly once, with the restriction that the maximum number of blocks equals the total number of streaming multiprocessors. In addition to enabling data reuse, invoking the kernel only once also lets us prefetch data into shared memory across inter-block synchronization points, which improves performance more than data reuse does. We realize our approach in a real-world NPDP application: the optimal matrix parenthesization problem. Experimental results demonstrate that invoking a kernel only once cannot guarantee performance improvement unless we also reuse and prefetch data across barrier synchronization points.
UR - http://www.scopus.com/inward/record.url?scp=84874098640&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84874098640&partnerID=8YFLogxK
U2 - 10.1109/ICPADS.2012.17
DO - 10.1109/ICPADS.2012.17
M3 - Conference contribution
AN - SCOPUS:84874098640
SN - 9780769549033
T3 - Proceedings of the International Conference on Parallel and Distributed Systems - ICPADS
SP - 45
EP - 52
BT - Proceedings of the 2012 IEEE 18th International Conference on Parallel and Distributed Systems, ICPADS 2012
T2 - 18th IEEE International Conference on Parallel and Distributed Systems, ICPADS 2012
Y2 - 17 December 2012 through 19 December 2012
ER -