Memory Efficient Multithreaded Algorithm: Conclusions, Source Code & References

by @multithreading



Too Long; Didn't Read

As the computational demands of prime enumeration grow, traditional implementations of the Sieve of Eratosthenes remain a bottleneck.

This paper is available on arXiv under a CC 4.0 license.

Authors:

(1) Evan Ning, Milton Academy, [email protected];

(2) David R. Kaeli, Department of Electrical and Computer Engineering, Northeastern University, [email protected].

6 Conclusions

In this paper, we explored how to better parallelize the execution of the Sieve of Eratosthenes. Both the arithmetic complexity of the computation and the storage requirement play a large role in the speed of the standard implementation. Our proposed algorithm is faster and more compact (requiring N/3 auxiliary storage), resulting in a more efficient parallel execution of the Sieve of Eratosthenes. We explored how to select the chunk size to optimize memory usage and avoid memory thrashing. Our incremental segmented approach enables us to handle arbitrarily large numbers, which is crucial for large-scale prime number computations and for applications that require extensive use of primes.
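
The N/3 auxiliary storage is consistent with sieving only candidates coprime to 6, i.e., numbers of the form 6k ± 1 (the same structure behind the divide-by-6 and mod-6 operations discussed under next steps). The single-threaded sketch below illustrates that storage layout; the names and structure are illustrative rather than taken from the authors' code, which additionally splits the range into chunks processed by multiple threads.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Sieve of Eratosthenes storing flags only for candidates of the form
// 6k - 1 and 6k + 1; every other integer above 3 is divisible by 2 or 3,
// so the auxiliary array holds roughly n/3 entries instead of n.
std::vector<uint64_t> sieve_coprime_to_6(uint64_t n) {
    std::vector<uint64_t> primes;
    if (n >= 2) primes.push_back(2);
    if (n >= 3) primes.push_back(3);
    if (n < 5) return primes;

    // even index i -> candidate 6*(i/2 + 1) - 1, odd index -> 6*(i/2 + 1) + 1
    auto candidate = [](uint64_t i) {
        return (i % 2) ? 6 * (i / 2 + 1) + 1 : 6 * (i / 2 + 1) - 1;
    };
    // inverse mapping for a value v that is coprime to 6
    auto index_of = [](uint64_t v) {
        return (v % 6 == 5) ? 2 * ((v + 1) / 6) - 2 : 2 * ((v - 1) / 6) - 1;
    };

    const uint64_t count = (n + 1) / 6 + (n - 1) / 6;  // about n/3 flags
    std::vector<bool> composite(count, false);

    for (uint64_t i = 0; i < count; ++i) {
        if (composite[i]) continue;
        const uint64_t p = candidate(i);
        primes.push_back(p);
        if (p > n / p) continue;                       // p*p > n: nothing to cross off
        // cross off odd multiples of p, skipping those divisible by 3
        for (uint64_t m = p * p; m <= n; m += 2 * p) {
            if (m % 3 != 0) composite[index_of(m)] = true;
        }
    }
    return primes;
}

int main() {
    for (uint64_t p : sieve_coprime_to_6(100)) std::cout << p << ' ';
    std::cout << '\n';                                 // 2 3 5 7 11 ... 89 97
}
```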


Memory management in high-performance multithreaded computations is extremely important. Cache sizes and physical memory constraints can have a large impact on performance. We encountered this when selecting the size of our scratch memory and found that there are tradeoffs in choosing it: a scratch region that is too large no longer fits in cache and can thrash, while one that is too small increases the number of segments and their associated overhead.
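
For instance, one thread's scratch size can be derived from the per-core cache so that its working flags stay cache-resident. The cache size below is an assumed figure for illustration, not a measurement from the paper:

```cpp
#include <cstddef>

// Choose a chunk size so that one thread's scratch flag array fits in the
// per-core L2 cache. The 512 KiB figure is an assumption for this sketch;
// in practice it would be queried from the system or tuned empirically.
constexpr std::size_t kL2CacheBytes       = 512 * 1024;
constexpr std::size_t kBytesPerFlag       = 1;   // one byte-sized flag per candidate
constexpr std::size_t kCandidatesPerChunk = kL2CacheBytes / kBytesPerFlag;

// With only the 6k +/- 1 candidates stored, each chunk then covers roughly
// 3 * kCandidatesPerChunk consecutive integers of the overall sieve range.
```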


Next Steps for Advancing the Multithreaded Sieve Algorithm Implementation:


  1. Prime Data Store File Writing: The current setup is poised for indefinite expansion, but to manage and preserve results efficiently, computed primes need to be written to files. To address memory constraints, the prime base list will be segmented, with each segment holding up to 125 million primes to prevent memory overflow and allow efficient retrieval (a sketch of such a segmented store appears after this list).


  2. Custom Big Integer Library: To sieve without an upper bound, a library for arbitrarily large integers is needed. General-purpose big-integer libraries exist, but a custom one can be optimized for this workload: the sieve only ever divides by 6, so specialized divide-by-6 and mod-6 operations can be provided (see the second sketch after this list).


  3. Incorporation of Multiprocessing: To scale the computation further, multiprocessing will be explored, dividing the work across multiple compute nodes. This comes with its own challenges; a primary concern is minimizing data transfer between processes while keeping the results coherent and consistent.
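
The segmented prime store in item 1 could look like the following sketch. The class name, file naming scheme, and binary layout are assumptions for illustration, not the authors' implementation; only the 125-million-prime segment capacity comes from the text above.

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Buffers primes in memory and flushes them to numbered binary segment
// files, each holding at most kSegmentCapacity primes, so the in-memory
// prime base list stays bounded while the stored set keeps growing.
class SegmentedPrimeStore {
public:
    static constexpr std::size_t kSegmentCapacity = 125'000'000;  // primes per segment

    explicit SegmentedPrimeStore(std::string prefix) : prefix_(std::move(prefix)) {}

    void add(uint64_t prime) {
        buffer_.push_back(prime);
        if (buffer_.size() == kSegmentCapacity) flush();
    }

    void flush() {
        if (buffer_.empty()) return;
        const std::string name = prefix_ + "_" + std::to_string(segment_index_++) + ".bin";
        std::ofstream out(name, std::ios::binary);
        out.write(reinterpret_cast<const char*>(buffer_.data()),
                  static_cast<std::streamsize>(buffer_.size() * sizeof(uint64_t)));
        buffer_.clear();
    }

    ~SegmentedPrimeStore() { flush(); }  // persist any partial final segment

private:
    std::string prefix_;
    std::size_t segment_index_ = 0;
    std::vector<uint64_t> buffer_;
};
```

Because every full segment is a fixed-capacity, ordered block, the file holding the i-th prime can be located by simple index arithmetic, which keeps retrieval efficient.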
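
For the mod-6 operation in item 2, the residue can be computed in one pass over the limbs. This assumes, purely for illustration, that the custom big integer stores little-endian 64-bit limbs; it relies on 2^64 ≡ 4 (mod 6) and 4^i ≡ 4 (mod 6) for every i ≥ 1.

```cpp
#include <cstdint>
#include <vector>

// Residue mod 6 of a big integer stored as little-endian 64-bit limbs:
// value = sum(limb[i] * 2^(64*i)), and since 2^64 = 4 (mod 6) while
// 4^i = 4 (mod 6) for all i >= 1,
// value = limb[0] + 4 * (limb[1] + limb[2] + ...)  (mod 6).
uint64_t mod6(const std::vector<uint64_t>& limbs) {
    if (limbs.empty()) return 0;
    uint64_t low  = limbs[0] % 6;
    uint64_t rest = 0;
    for (std::size_t i = 1; i < limbs.size(); ++i) {
        rest = (rest + limbs[i] % 6) % 6;   // reduce as we go to avoid overflow
    }
    return (low + 4 * rest) % 6;
}
```

A specialized divide-by-6 can be built in the same spirit as a single limb-wise pass that carries the remainder, avoiding a general-purpose big-integer division routine.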

Source Code

GitHub: https://github.com/evanning1/Multi-thread_Prime_Sieve

References

[1] R.L. Rivest, A. Shamir, L. Adleman, “A Method for Obtaining Digital Signatures and Public-Key Cryptosystems,” (1978) Massachusetts Institute of Technology, https://people.csail.mit.edu/vinodv/COURSES/MAT302-S12/Rsapaper.pdf


[2] Xuedong Luo, “A Practical Sieve Algorithm Finding Prime Numbers,” (1989) Communications of the ACM, 32(3), March 1989, 344–346. https://doi.org/10.1145/62065.62072


[3] Torben Hansen, “Parallel Implementation of the Sieve of Eratosthenes” (F120116), Utrecht University, Institute of Mathematics. https://himsen.github.io/pdf/Project1_parallel_algorithms_Torben.pdf


[4] Mário Cordeiro, “Parallelization of the Sieve of Eratosthenes,” Faculdade de Engenharia da Universidade do Porto, Porto, Portugal.


[5] Jonathan Sorenson, “An Introduction to Prime Number Sieves,” Computer Sciences Technical Report #909, Department of Computer Sciences, University of Wisconsin-Madison, January 2, 1990.


[6] POSIX Threads Programming https://hpc-tutorials.llnl.gov/posix/


[7] Research Computing at Northeastern University https://rc-docs.northeastern.edu/en/latest/