Table of Links
II. Spim Performance, Advantages and Generality
III. Inherently Low Rank Problems
A. Properties of Low Rank Graphs
B. Weakly NP-Complete Problems and Hardware Precision Limitation
C. Limitation of Low Rank Matrix Mapping
IV. Low Rank Approximation
A. Decomposition of Target Coupling Matrix
B. Low Rank Approximation of Coupling Matrices
D. Low-Rank Approximation of Random Coupling Matrices
E. Low Rank Approximation for Portfolio Optimization
F. Low-Rank Matrices in Restricted Boltzmann Machines
V. Constrained Number Partitioning Problem
A. Definition and Characteristics of the Constrained Number Partitioning Problem
B. Computational Hardness of Random CNP Instances
VI. Translation Invariant Problems
VII. Conclusions, Acknowledgements, and References
A. Properties of Low Rank Graphs
Given the advantages of optical annealers based on spatial light modulation, as previously discussed, it is crucial to understand the structure of Ising coupling matrices. Every Ising coupling matrix J can be associated with the (weighted) adjacency matrix of an underlying (weighted) graph. One can then define the rank of a graph by identifying it with the rank of its adjacency matrix. For graphs with identical connectivity, the unweighted version will generally have a different rank from the weighted graph.
To consider weighted graphs, a generalization of the rank of a graph is given by the minimum rank mr(G), defined as

mr(G) = min{rank(A) : A ∈ S(G)},

where S(G) is the set of real symmetric matrices whose off-diagonal nonzero pattern matches the edges of G.
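As a concrete illustration (not taken from the paper), the same connectivity can yield different ranks for the unweighted and weighted adjacency matrices. A minimal numerical sketch using the complete bipartite graph K2,2, with illustrative edge weights:

```python
import numpy as np

# Same connectivity (complete bipartite graph K_{2,2}), two weightings.
A_unweighted = np.array([[0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [1, 1, 0, 0],
                         [1, 1, 0, 0]], dtype=float)

B = np.array([[1, 2],
              [3, 4]], dtype=float)          # generic (invertible) edge weights
A_weighted = np.block([[np.zeros((2, 2)), B],
                       [B.T, np.zeros((2, 2))]])

print(np.linalg.matrix_rank(A_unweighted))   # 2
print(np.linalg.matrix_rank(A_weighted))     # 4
```

The unweighted version has rank 2, while generic weights on the same edges raise the rank to 4, consistent with the need for the minimum-rank notion above.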
The authors of [34] proposed constructing Ising problems with tunable hardness and exactly known coupling-matrix rank. This family of constructed Ising problems is known as the Wishart planted ensemble, and it shows a hardness peak at a relatively small rank
R ≈ 1.63 + 0.073N,
where N is the number of spins. This is not ideal, as the required rank increases linearly with the problem size N to produce the hardest problems, but it could still serve as a benchmark for small-scale SPIM-type devices, since at N ≈ 100 the hardest problem only has rank R ≈ 9.
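For reference, the hardness-peak rank can be tabulated directly from the formula above (a trivial sketch):

```python
# Hardness-peak rank of the Wishart planted ensemble, R ≈ 1.63 + 0.073 N,
# as quoted from [34] above.
def hardness_peak_rank(N):
    return 1.63 + 0.073 * N

for N in (50, 100, 500):
    print(N, round(hardness_peak_rank(N)))
```

At N ≈ 100 the formula gives R ≈ 9, so the rank budget of a small SPIM would not be exceeded.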
B. Weakly NP-Complete Problems and Hardware Precision Limitation
The authors of [25] proposed a mapping from the knapsack problem with integer weights to an Ising problem with a coupling matrix J that can be represented by Eq. (4) with rank(J) = 2, which does not grow with the size of the problem. The problem is defined as follows: given a set of items, each having value vi and integer weight wi > 0, find a subset of the items that maximizes the total value while keeping the total weight of the subset no greater than a given limit W. The optimization version of the knapsack problem with integer weights is known to be NP-hard [26]. Hence, it was argued that the linear combination method can efficiently implement the Ising formulation of NP-hard problems on SPIM.
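The exact mapping of [25] is not reproduced here, but the key structural point, that a coupling matrix built from a fixed number K of outer products has rank at most K regardless of the number of spins N (assuming Eq. (4) has this sum-of-outer-products form), is easy to check numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 2
# A coupling matrix assembled from K outer products has rank at most K,
# independently of the number of spins N.
vs = rng.normal(size=(K, N))
J = sum(np.outer(v, v) for v in vs)
print(np.linalg.matrix_rank(J))
```

The rank stays at K = 2 however large N becomes, which is what makes such mappings attractive for spatial-light-modulator hardware.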
Hence, to simulate a weakly NP-complete problem whose solution requires exponentially growing resources on a classical computer, the precision of the SPIM optical hardware must grow to encode the larger integers, represented by more binary digits L, involved in these problems. This is unlikely to be realized in experiments, because coupling matrices can only be implemented in SPIM to a fixed number of significant digits, likely much smaller than the problem input sizes of practical interest.
One can introduce the signed discrepancy D of the numbers ni, given the binary variables si [38], as

D = Σi ni si.
D can be interpreted as the final distance to the origin of a one-dimensional random walker who takes steps to the left (si = −1) or to the right (si = +1) with random step sizes ni. One can calculate the average number of walks that end at D as

Ω(D) = ⟨ Σ{si} δ(D, Σi ni si) ⟩,
where ⟨⋅⟩ denotes averaging over the random numbers n and δ is the Kronecker delta function. For fixed {si} and large N, the distribution of D can be treated as Gaussian with mean ⟨n⟩ Σi si and variance σ² = N(⟨n²⟩ − ⟨n⟩²).
Hence, to leading order in L, the probability of the walk ending at a distance D is Gaussian with width σ ∼ √N · 2^L, so the expected number of perfect partitions scales as 2^N P(D = 0) ∼ 2^(N(1−κ)), where κ = L/N.
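The Gaussian treatment of D can be sanity-checked by Monte Carlo: for a fixed spin configuration si and numbers ni drawn uniformly from {1, ..., 2^L}, the sample mean and variance of D = Σi ni si should approach ⟨n⟩ Σi si and N(⟨n²⟩ − ⟨n⟩²). A short sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, T = 64, 10, 20000
M = 2**L                                   # n_i drawn uniformly from {1, ..., M}
s = rng.choice([-1, 1], size=N)            # one fixed spin configuration
n = rng.integers(1, M + 1, size=(T, N))    # T independent draws of the numbers
D = (n * s).sum(axis=1)                    # signed discrepancies

mean_theory = (M + 1) / 2 * s.sum()        # <n> * sum_i s_i
var_theory = N * (M**2 - 1) / 12           # N * Var(n) for uniform {1,...,M}
print(D.mean(), mean_theory)
print(D.var(), var_theory)
```

The empirical moments match the central-limit prediction, supporting the Gaussian approximation used above.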
The critical value of this ratio, denoted κc, indicates the phase transition (to leading order, κc = 1). When κ < κc, on average there exists an exponential number of perfect partitions with discrepancy D = 0; when κ > κc, none exist.
Another essential aspect of such a simple random walk model is that it allows finite-size effects to be traced. For instance, at a relatively small system size of around N ≈ 17, the critical value shifts to κc ≈ 0.9 because of the finite-size scaling window of the transition. These effects become more pronounced in hardware that operates with a limited number of variables.
This statistical analysis aligns with our previous discussion, which suggested that to generate computationally hard NPP instances, the number of integers N and binary digits L must increase, which can be challenging in practice. Nevertheless, despite its limitations, NPP is a suitable platform for integrating additional modifications or constraints, making it adaptable for deployment on hardware with limited physical resources.
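The easy-hard dependence on κ = L/N can be seen in miniature by brute force: with few binary digits a perfect or near-perfect partition is almost always present, while with many digits the best achievable discrepancy is large. A small illustrative sketch (toy sizes, not the hard regime of practical interest):

```python
import itertools
import numpy as np

def min_discrepancy(nums):
    """Smallest |D| over all 2^N sign assignments (brute force)."""
    best = sum(nums)
    for signs in itertools.product([-1, 1], repeat=len(nums)):
        best = min(best, abs(sum(s * x for s, x in zip(signs, nums))))
    return best

rng = np.random.default_rng(2)
N = 14
easy = rng.integers(1, 2**3 + 1, size=N)    # L = 3,  kappa = L/N ≈ 0.21
hard = rng.integers(1, 2**30 + 1, size=N)   # L = 30, kappa = L/N ≈ 2.1
easy_min = min_discrepancy(easy)
hard_min = min_discrepancy(hard)
print(easy_min, hard_min)
```

The low-precision instance partitions almost perfectly, while the high-precision instance leaves a large residual discrepancy, mirroring the κ < κc versus κ > κc regimes.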
The multiprocessor scheduling problem [39] stands out among the conventional NPP modifications. It involves distributing a workload across parallel processors to minimize the overall runtime. Its transition can be detailed by mapping it onto a mean-field anti-ferromagnetic Potts model. In the multiprocessor scheduling problem, we are given q identical processors and N independent tasks with running times ai (instead of numbers ni). The objective is to create a schedule assigning the N tasks to the q processors that minimizes the longest processor finishing time

T = max α Aα,  with Aα = Σ{i : s(i)=α} ai,
where s(i) denotes which processor task i is assigned to, and Aα represents the total workload of processor α. The critical factor is whether two tasks are on the same processor, so it is convenient to encode the schedules using Potts vectors [40], which reflect the symmetry of the problem. It was shown that this results in a different critical value κc(q) for the easy-hard phase transition, one that is less than 1 for q > 1.
Such a shift is beneficial in our context because lower precision is required to achieve the same complexity on the hardware.
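For intuition about the objective being minimized, the makespan T = max α Aα can be evaluated exhaustively for toy instances. The sketch below brute-forces the optimal assignment of a few tasks to q processors (instance values illustrative):

```python
import itertools

def best_makespan(times, q):
    """Exhaustively assign tasks to q processors; minimize the longest load."""
    best = sum(times)
    for assign in itertools.product(range(q), repeat=len(times)):
        loads = [0] * q
        for t, a in zip(times, assign):
            loads[a] += t                 # A_alpha: total workload of processor a
        best = min(best, max(loads))      # T = max_alpha A_alpha
    return best

times = [5, 7, 3, 8, 2, 6, 4, 9]          # running times a_i, total = 44
print(best_makespan(times, q=2), best_makespan(times, q=3))
```

For q = 2 the tasks split perfectly into two loads of 22, and for q = 3 the optimum of 15 meets the lower bound ⌈44/3⌉, illustrating how the balanced partitions of the NPP generalize to the Potts setting.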
A more straightforward approach is to consider the so-called constrained number partitioning problem (CNP), which will be discussed in more detail in Section V. It is a variation of the original NPP in which, apart from splitting the integers into groups with equal sums, we must also meet an additional requirement known as a cardinality constraint: the difference between the numbers of integers in one group and the other must equal a specific value.
The NPP is a good platform for various modifications, even though it is unsuitable for implementation on SPIM in its original form. Modifications can increase complexity within constrained resources, and they can even be combined: for example, one could tackle the multiprocessor scheduling problem while adhering to a cardinality constraint. This versatility allows parameters to be fine-tuned to obtain tasks of a specific complexity.
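The cardinality constraint is easy to express in the brute-force picture used above: one simply restricts the sign vectors to a fixed value of Σi si. A toy sketch (instance values illustrative):

```python
import itertools

def min_discrepancy(nums, cardinality=None):
    """Min |D| over sign vectors; optionally require sum(s_i) == cardinality."""
    best = None
    for signs in itertools.product([-1, 1], repeat=len(nums)):
        if cardinality is not None and sum(signs) != cardinality:
            continue                      # enforce the cardinality constraint
        d = abs(sum(s * x for s, x in zip(signs, nums)))
        best = d if best is None else min(best, d)
    return best

nums = [4, 13, 7, 25, 16, 9, 2, 11, 6, 14]
print(min_discrepancy(nums))                  # unconstrained NPP
print(min_discrepancy(nums, cardinality=0))   # balanced groups (5 vs 5)
```

Because the constrained search space is a subset of the unconstrained one, the constrained optimum can never be better; tightening the constraint is one knob for tuning instance difficulty.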
C. Limitation of Low Rank Matrix Mapping
Given these limitations, two routes remain open:

- Find approximate solutions to hard problems by approximating them with a low-rank matrix and then solving the approximate problem with SPIM. This is discussed in Section IV.
- Identify NP problems whose precision requirement L and rank requirement K grow slowly as the problem size N increases while maintaining their hardness. One possible candidate problem is presented in Section V.
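The first route can be sketched numerically: the best rank-K approximation of a coupling matrix J in the Frobenius norm is given by its truncated SVD (the Eckart-Young theorem), with error equal to the root sum of squares of the discarded singular values. A minimal sketch with a random symmetric J (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 32, 4
J = rng.normal(size=(N, N))
J = (J + J.T) / 2                          # symmetric coupling matrix

# Best rank-K approximation via truncated SVD (Eckart-Young).
U, s, Vt = np.linalg.svd(J)
J_K = U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]

err = np.linalg.norm(J - J_K, "fro")
print(err, np.sqrt((s[K:] ** 2).sum()))    # the two values coincide
```

How much Ising-ground-state quality survives such a truncation is exactly the question taken up in Section IV.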
Authors:
(1) Richard Zhipeng Wang, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom;
(2) James S. Cummins, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom;
(3) Marvin Syed, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom;
(4) Nikita Stroev, Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100, Israel;
(5) George Pastras, QUBITECH, Thessalias 8, Chalandri, GR 15231 Athens, Greece;
(6) Jason Sakellariou, QUBITECH, Thessalias 8, Chalandri, GR 15231 Athens, Greece;
(7) Symeon Tsintzos, QUBITECH, Thessalias 8, Chalandri, GR 15231 Athens, Greece and UBITECH ltd, 95B Archiepiskopou Makariou, CY 3020 Limassol, Cyprus;
(8) Alexis Askitopoulos, QUBITECH, Thessalias 8, Chalandri, GR 15231 Athens, Greece and UBITECH ltd, 95B Archiepiskopou Makariou, CY 3020 Limassol, Cyprus;
(9) Daniele Veraldi, Department of Physics, University Sapienza, Piazzale Aldo Moro 5, Rome 00185, Italy;
(10) Marcello Calvanese Strinati, Research Center Enrico Fermi, Via Panisperna 89A, 00185 Rome, Italy;
(11) Silvia Gentilini, Institute for Complex Systems, National Research Council (ISC-CNR), Via dei Taurini 19, 00185 Rome, Italy;
(12) Davide Pierangeli, Institute for Complex Systems, National Research Council (ISC-CNR), Via dei Taurini 19, 00185 Rome, Italy;
(13) Claudio Conti, Department of Physics, University Sapienza, Piazzale Aldo Moro 5, Rome 00185, Italy;
(14) Natalia G. Berloff, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom ([email protected]).