
Assessing the Justification for Integrating Deep Learning in Combinatorial Optimization


Too Long; Didn't Read

This summary highlights the importance of thorough comparisons between heuristics that integrate deep learning and classical heuristics in combinatorial optimization. It emphasizes the need to articulate both strengths and weaknesses in order to guide research investment and to justify the integration of deep learning architectures.

Authors:

(1) Ankur Nath, Department of Computer Science and Engineering, Texas A&M University;

(2) Alan Kuhnle, Department of Computer Science and Engineering, Texas A&M University.

Abstract & Introduction

Related work

Evaluation for Max-Cut

Evaluation for SAT

Summary and Outlook, References

Supplementary Materials

5 SUMMARY AND OUTLOOK

Through our empirical evaluations, we aim to promote insightful comparisons in research at the intersection of combinatorial optimization and machine learning. To give the research community useful guidance, it is imperative to communicate both the strengths and the weaknesses of proposed approaches: a poor selection of instances and baselines can give a misleading impression of the performance of learned heuristics. In particular, researchers should articulate the degree of improvement achieved by integrating classical heuristics with deep learning architectures and conduct thorough comparisons against the classical heuristics alone. Such comparisons clarify how much the deep learning component actually contributes, and whether the integration justifies the computational resources, time, and investment it requires.
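To make this recommendation concrete, the following Python sketch shows one way such a side-by-side evaluation could be structured for Max-Cut. It is a minimal illustration, not the authors' code: `learned_heuristic` is a hypothetical stand-in for any learned method, and all function names and parameters are illustrative. The classical baseline is a simple greedy 1-flip local search, run on identical Erdős–Rényi instances so the two methods can be compared on equal footing.

```python
"""Minimal sketch of a head-to-head Max-Cut evaluation harness.

Assumptions: `learned_heuristic` is a placeholder for a learned method
(e.g., an RL policy); a real study would plug in the actual model and
also include tuned classical baselines such as tabu search.
"""
import random


def random_er_graph(n, p, seed):
    """Erdős–Rényi G(n, p) graph as an edge list."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]


def cut_value(edges, side):
    """Number of edges crossing the partition `side` (dict: node -> 0/1)."""
    return sum(1 for u, v in edges if side[u] != side[v])


def local_search_maxcut(edges, nodes, seed, max_sweeps=10_000):
    """Classical baseline: greedy 1-flip local search from a random start."""
    rng = random.Random(seed)
    side = {v: rng.randint(0, 1) for v in nodes}
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    for _ in range(max_sweeps):
        improved = False
        for v in nodes:
            # Flipping v gains (same-side neighbours) - (cross neighbours).
            same = sum(1 for w in adj[v] if side[w] == side[v])
            if same > len(adj[v]) - same:
                side[v] ^= 1
                improved = True
        if not improved:
            break  # local optimum reached
    return side


def learned_heuristic(edges, nodes, seed):
    """Placeholder for a learned heuristic; returns a random partition
    so the harness runs end to end."""
    rng = random.Random(seed)
    return {v: rng.randint(0, 1) for v in nodes}


if __name__ == "__main__":
    n, p = 60, 0.15
    for seed in range(3):
        edges = random_er_graph(n, p, seed)
        nodes = range(n)
        classical = cut_value(edges, local_search_maxcut(edges, nodes, seed))
        learned = cut_value(edges, learned_heuristic(edges, nodes, seed))
        print(f"instance {seed}: classical={classical}  learned={learned}")
```

The essential design point is that both methods see the same instances and the same random seeds, so any gap in cut values can be attributed to the heuristics themselves rather than to instance selection.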

References

Albert, R. and Barabási, A.-L. (2002). Statistical mechanics of complex networks. Reviews of modern physics, 74(1):47.


Barrett, T., Clements, W., Foerster, J., and Lvovsky, A. (2020). Exploratory combinatorial optimization with reinforcement learning. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 3243–3250.


Barrett, T. D., Parsonson, C. W., and Laterre, A. (2022). Learning to solve combinatorial graph partitioning problems via efficient exploration. arXiv preprint arXiv:2205.14105.


Battiti, R. and Protasi, M. (1997). Reactive search, a history-sensitive heuristic for MAX-SAT. Journal of Experimental Algorithmics (JEA), 2:2–es.


Battiti, R. and Tecchiolli, G. (1994). The reactive tabu search. ORSA journal on computing, 6(2):126–140.


Bello, I., Pham, H., Le, Q. V., Norouzi, M., and Bengio, S. (2016). Neural combinatorial optimization with reinforcement learning. arXiv preprint arXiv:1611.09940.


Bengio, Y., Lodi, A., and Prouvost, A. (2021). Machine learning for combinatorial optimization: a methodological tour d’horizon. European Journal of Operational Research, 290(2):405–421.


Benlic, U. and Hao, J.-K. (2013). Breakout local search for the max-cut problem. Engineering Applications of Artificial Intelligence, 26(3):1162–1173.


Boettcher, S. and Percus, A. G. (2001). Extremal optimization for graph partitioning. Physical Review E, 64(2):026114.


CPLEX (2023). CPLEX. https://www.ibm.com/products/ilog-cplex-optimization-studio/cplex-optimizer.


Dong, Y., Goldberg, A. V., Noe, A., Parotsidis, N., Resende, M. G., and Spaen, Q. (2021). New instances for maximum weight independent set from a vehicle routing application. In Operations Research Forum, volume 2, pages 1–6. Springer.


Eén, N. and Sörensson, N. (2003). An extensible SAT-solver. In International conference on theory and applications of satisfiability testing, pages 502–518. Springer.


Elsokkary, N., Khan, F. S., La Torre, D., Humble, T. S., and Gottlieb, J. (2017). Financial portfolio management using D-Wave quantum optimizer: The case of Abu Dhabi Securities Exchange. Technical report, Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States).


Erdős, P., Rényi, A., et al. (1960). On the evolution of random graphs. Publ. math. inst. hung. acad. sci, 5(1):17–60.


Festa, P. and Resende, M. G. (2009). Hybrid GRASP heuristics. In Foundations of Computational Intelligence Volume 3: Global Optimization, pages 75–100. Springer.


Flint, A. and Blaschko, M. (2012). Perceptron learning of SAT. Advances in Neural Information Processing Systems, 25.


Fujishige, S. (2005). Submodular functions and optimization. Elsevier.


Ganesh, V., Singh, R., Near, J. P., and Rinard, M. (2009). AvatarSAT: An auto-tuning Boolean SAT solver.


Glover, F. (1990). Tabu search: A tutorial. Interfaces, 20(4):74–94.


Goemans, M. X. and Williamson, D. P. (1995). Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145.


Grozea, C. and Popescu, M. (2014). Can machine learning learn a decision oracle for NP problems? A test on SAT. Fundamenta Informaticae, 131(3-4):441–450.


Haim, S. and Walsh, T. (2009). Restart strategy selection using machine learning techniques. In International Conference on Theory and Applications of Satisfiability Testing, pages 312–325. Springer.


Helmberg, C. and Rendl, F. (2000). A spectral bundle method for semidefinite programming. SIAM Journal on Optimization, 10(3):673–696.


Hoos, H. H. et al. (2002). An adaptive noise mechanism for WalkSAT. In AAAI/IAAI, pages 655–660.


Jaszczur, S., Luszczyk, M., and Michalewski, H. (2020). Neural heuristics for SAT solving. arXiv preprint arXiv:2005.13406.


Karp, R. M. (1972). Reducibility among Combinatorial Problems, pages 85–103. Springer US, Boston, MA.


Khalil, E., Dai, H., Zhang, Y., Dilkina, B., and Song, L. (2017). Learning combinatorial optimization algorithms over graphs. Advances in Neural Information Processing Systems, 30.


KhudaBukhsh, A. R., Xu, L., Hoos, H. H., and Leyton-Brown, K. (2016). SATenstein: Automatically building local search SAT solvers from components. Artificial Intelligence, 232:20–42.


Kirkpatrick, S., Gelatt Jr, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598):671–680.


Kurin, V., Godil, S., Whiteson, S., and Catanzaro, B. (2020). Can Q-learning with graph networks learn a generalizable branching heuristic for a SAT solver? Advances in Neural Information Processing Systems, 33:9608–9621.


Lauria, M., Elffers, J., Nordström, J., and Vinyals, M. (2017). CNFgen: A generator of crafted benchmarks. In Theory and Applications of Satisfiability Testing – SAT 2017: 20th International Conference, Melbourne, VIC, Australia, August 28–September 1, 2017, Proceedings 20, pages 464–473. Springer.


Lederman, G., Rabe, M., Seshia, S., and Lee, E. A. (2019). Learning heuristics for quantified Boolean formulas through reinforcement learning. In International Conference on Learning Representations.


Leleu, T., Yamamoto, Y., McMahon, P. L., and Aihara, K. (2019). Destabilization of local minima in analog spin systems by correction of amplitude heterogeneity. Physical Review Letters, 122(4):040607.


Li, C. M., Wei, W., and Zhang, H. (2007). Combining adaptive noise and look-ahead in local search for SAT. In Theory and Applications of Satisfiability Testing – SAT 2007: 10th International Conference, Lisbon, Portugal, May 28–31, 2007, Proceedings 10, pages 121–133. Springer.


Liang, J. H., Ganesh, V., Poupart, P., and Czarnecki, K. (2016). Learning rate based branching heuristic for SAT solvers. In Theory and Applications of Satisfiability Testing – SAT 2016: 19th International Conference, Bordeaux, France, July 5–8, 2016, Proceedings 19, pages 123–140. Springer.


Manchanda, S., Mittal, A., Dhawan, A., Medya, S., Ranu, S., and Singh, A. (2019). Learning heuristics over large graphs via deep reinforcement learning. arXiv preprint arXiv:1903.03332.


Mazure, B., Saïs, L., and Grégoire, É. (1997). Tabu search for SAT. In Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence, pages 281–285.


McAllester, D., Selman, B., Kautz, H., et al. (1997). Evidence for invariants in local search. In AAAI/IAAI, pages 321–326. Rhode Island, USA.


Mitchell, D., Selman, B., Levesque, H., et al. (1992). Hard and easy distributions of SAT problems. In AAAI, volume 92, pages 459–465.


Perdomo-Ortiz, A., Dickson, N., Drew-Brook, M., Rose, G., and Aspuru-Guzik, A. (2012). Finding low-energy conformations of lattice protein models by quantum annealing. Scientific reports, 2(1):1–7.


Selman, B., Kautz, H. A., Cohen, B., et al. (1993). Local search strategies for satisfiability testing. Cliques, coloring, and satisfiability, 26:521–532.


Selman, B., Mitchell, D. G., and Levesque, H. J. (1996). Generating hard satisfiability problems. Artificial intelligence, 81(1-2):17–29.


Tiunov, E. S., Ulanov, A. E., and Lvovsky, A. (2019). Annealing by simulating the coherent Ising machine. Optics Express, 27(7):10288–10295.


Toenshoff, J., Ritzert, M., Wolf, H., and Grohe, M. (2019). Graph neural networks for maximum constraint satisfaction. Frontiers in artificial intelligence, 3:580607.


Venturelli, D. and Kondratyev, A. (2019). Reverse quantum annealing approach to portfolio optimization problems. Quantum Machine Intelligence, 1(1-2):17–30.


Wolpert, D. H. and Macready, W. G. (1997). No free lunch theorems for optimization. IEEE transactions on evolutionary computation, 1(1):67–82.


Ye, Y. (2003). The Gset dataset. https://web.stanford.edu/~yyye/yyye/Gset/.


Yolcu, E. and Póczos, B. (2019). Learning local search heuristics for Boolean satisfiability. Advances in Neural Information Processing Systems, 32.


This paper is available on arXiv under the CC 4.0 DEED license.