1.2 Asymptotic Notation (Big O)
Big O notation, denoted O(g(n)), is a mathematical representation widely used in computer science to describe the upper bound or worst-case behavior of algorithms and functions as the input size, denoted n, approaches infinity. In essence, it characterizes a function's growth rate or an algorithm's time complexity [3].
Formally, for a given function g(n), O(g(n)) represents the set of functions f(n) for which there exist positive constants c and n₀ such that for all n greater than or equal to n₀, f(n) is bounded above by c times g(n). Mathematically, it can be expressed as:

O(g(n)) = { f(n) : ∃ c > 0, ∃ n₀ > 0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
In simpler terms, if a function f(n) can be bounded above by a constant multiple of g(n) for all sufficiently large values of n, then f(n) belongs to the set O(g(n)).
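As an illustration of the definition above, the following sketch numerically checks candidate witnesses c and n₀ for a claimed bound over a finite range of n (the functions, witness values, and the helper name `bounded_by` are illustrative choices, not from the source):

```python
def bounded_by(f, g, c, n0, n_max=10_000):
    """Check that 0 <= f(n) <= c * g(n) holds for every n in [n0, n_max].

    This only samples a finite range, so it can support (not prove)
    a claim that f(n) is in O(g(n)) with witnesses c and n0.
    """
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))


# Example: f(n) = 3n + 10 is in O(n) with witnesses c = 4, n0 = 10,
# since 3n + 10 <= 4n holds exactly when n >= 10.
f = lambda n: 3 * n + 10
g = lambda n: n

print(bounded_by(f, g, c=4, n0=10))  # True: the bound holds from n0 onward
print(bounded_by(f, g, c=3, n0=10))  # False: c = 3 is too small a constant
```

Note that failing such a check for one (c, n₀) pair says nothing about membership in O(g(n)); the definition only requires that some pair of witnesses exists.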
Big O notation provides a concise way to analyze and compare the efficiency of algorithms, focusing on their scalability and performance characteristics without getting bogged down in specific implementation details. By understanding the asymptotic behavior of algorithms, developers can make informed decisions about algorithm selection and optimization strategies, which is crucial for designing efficient and scalable software systems.
Authors:
(1) Agni Rakshit, Department of Mathematics, National Institute of Technology, Durgapur, Durgapur, India ([email protected]);
(2) Gautam Bandyopadhyay, Department of Management Studies, National Institute of Technology, Durgapur, Durgapur, India ([email protected]);
(3) Tanujit Chakraborty, Department of Science and Engineering & Sorbonne Center for AI, Sorbonne University, Abu Dhabi, United Arab Emirates ([email protected]).