Data correction using Hamming coding and hash function and its CUDA implementation

2019; pp. 100–104

¹ Lviv Polytechnic National University, Computer Engineering Department
² Lviv Polytechnic National University

This article deals with applying a block code to an entire block of data. A hash function is used to increase the number of errors that can be detected. Automatic parallelization of this code using CUDA is also considered.
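The combination described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a classic Hamming(7,4) code applied per nibble (the paper does not specify the block length) and SHA-256 as the hash (the paper does not name a specific hash function). Each codeword can correct a single bit flip; the block hash then catches multi-bit errors that slip past or are miscorrected by the code.

```python
import hashlib

def hamming74_encode(nibble):
    """Encode 4 data bits (int 0-15) into a 7-bit Hamming(7,4) codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]          # d1..d4
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword bit positions 1..7: p1 p2 d1 p3 d2 d3 d4
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_correct(word):
    """Syndrome-decode a 7-bit word, fixing at most one flipped bit."""
    bits = [(word >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]         # checks positions 1,3,5,7
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]         # checks positions 2,3,6,7
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]         # checks positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3    # 1-based position of the flipped bit, 0 if none
    if pos:
        bits[pos - 1] ^= 1
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)

def protect(data: bytes):
    """Encode every nibble of `data` and attach a hash of the whole block."""
    words = []
    for byte in data:
        words.append(hamming74_encode(byte & 0xF))
        words.append(hamming74_encode(byte >> 4))
    return words, hashlib.sha256(data).digest()

def recover(words, digest):
    """Correct single-bit errors per codeword, then verify the block hash."""
    out = bytearray()
    for lo, hi in zip(words[::2], words[1::2]):
        out.append(hamming74_correct(lo) | (hamming74_correct(hi) << 4))
    return bytes(out), hashlib.sha256(out).digest() == digest
```

Since the per-codeword correction is fully independent, the `hamming74_correct` step maps naturally onto one GPU thread per codeword in a CUDA kernel, which is the kind of data parallelism the article exploits.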
