Study Finds MAPF Decomposition Efficient Under Low Agent Density

Written by instancing | Published 2026/02/19
Tech Story Tags: multi-agent-systems | multi-agent-pathfinding | mapf-decomposition | collision-free-path-planning | ai-search-optimization | priority-based-search | distributed-planning-systems | dependency-graph-optimization

TL;DR: This study evaluates the computational impact of decomposing Multi-Agent Path Finding (MAPF) instances into subproblems. Across 22,300 benchmark instances, decomposition generally adds minimal overhead, under 1 second and less than 1 MB of memory, while significantly reducing problem size in maps with abundant free grids. However, as agent density increases and free space narrows, decomposition becomes less effective, often collapsing back into a single unsplit problem. The results show decomposition is highly practical for sparse environments but limited in densely packed scenarios.

ABSTRACT

I. INTRODUCTION

II. RELATED WORKS

III. PRELIMINARIES

IV. METHODOLOGY

V. RESULTS OF DECOMPOSITION

VI. RESULTS OF DECOMPOSITION’S APPLICATION

VII. CONCLUSION, ACKNOWLEDGMENTS AND REFERENCES

V. RESULTS OF DECOMPOSITION

In theory, the decomposition of a MAPF instance reduces the cost of solving the instance, but it also incurs its own time and memory costs. Therefore, before analyzing how the decomposition of a MAPF instance contributes to solving the instance, it is essential to examine the time and memory usage of the decomposition process.

Specifically, we report peak memory usage as the memory metric, sampling the program's memory usage every millisecond during execution. However, because the recording resolution is 1 KB, memory usage below 1 KB is recorded as 0 MB; this can occur on certain maps.
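As a concrete illustration of this measurement setup, the sketch below reads the peak resident set size on Linux (the platform used in the experiments). The function names and the use of /proc/self/status are our assumptions for illustration; the paper does not describe its actual instrumentation.

```cpp
// Sketch: obtaining peak memory usage on Linux. The kernel tracks the
// peak resident set size as the "VmHWM" field of /proc/self/status
// (reported in kB), so a read after the measured code finishes yields
// the high-water mark directly. Names here are illustrative only.
#include <cassert>
#include <fstream>
#include <functional>
#include <sstream>
#include <string>

// Read the process's peak resident set size (VmHWM) in kB; 0 on failure.
long readPeakRssKb() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.rfind("VmHWM:", 0) == 0) {
            std::istringstream iss(line.substr(6));
            long kb = 0;
            iss >> kb;
            return kb;
        }
    }
    return 0;
}

// Approximate the peak memory (MB) attributable to running `work` as
// the growth of the high-water mark across the call. Non-negative,
// since VmHWM is monotone.
double peakMemoryMb(const std::function<void()>& work) {
    long before = readPeakRssKb();
    work();
    long after = readPeakRssKb();
    return (after - before) / 1024.0;
}
```

Because the high-water mark is maintained at kilobyte granularity, any growth below 1 KB reads back as zero, which matches the "0 MB" readings the text mentions on small maps.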

Furthermore, we analyze how each step of the decomposition process influences the overall decomposition by evaluating changes in the decomposition rate (maximum subproblem size / total number of agents) and the number of subproblems after each step. The steps are numbered sequentially, with the first step marked as "1" and subsequent steps indicated as "1, 2, 3," and so forth. We employ a classic MAPF dataset [18] for our analysis, comprising 24 typical maps from the dataset. These maps feature an increasing number of agents, and for each agent count 100 instances are sampled randomly, resulting in a total of 22,300 MAPF instances.
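The decomposition rate tracked throughout this section can be computed directly from the subproblem sizes. A minimal sketch (the function name is ours; only the formula comes from the text):

```cpp
// Decomposition rate = size of the largest subproblem / total agents.
// A rate near 0 means the instance split into many small pieces;
// a rate of 1 means decomposition failed (one unsplit problem).
#include <algorithm>
#include <cassert>
#include <vector>

double decompositionRate(const std::vector<int>& subproblemSizes) {
    int total = 0, largest = 0;
    for (int s : subproblemSizes) {
        total += s;
        largest = std::max(largest, s);
    }
    return total == 0 ? 1.0 : static_cast<double>(largest) / total;
}
```

For instance, splitting 6 agents into subproblems of sizes {3, 2, 1} gives a rate of 0.5, while a single unsplit problem of 6 agents gives a rate of 1.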

The same instances are used to evaluate how decomposition influences typical MAPF methods. Experiments were conducted on a laptop running Ubuntu 20.04, equipped with a Ryzen 7 5800H (3.2 GHz) CPU and 16 GB of memory. All code is implemented in C++. The results of our experiments are presented in Fig. 3 and Fig. 4.

A. Decomposition rate

The decomposition rate is closely correlated with the time cost and memory usage in solving a MAPF instance, as both time cost and memory usage are primarily determined by the size of the largest subproblem.
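Why the largest subproblem dominates can be seen with a toy cost model. Assume, purely for illustration, a solver whose cost grows cubically with the number of agents (real MAPF solvers are worst-case exponential, which only strengthens the effect):

```cpp
// Toy cost model: solver cost grows cubically with problem size.
// This is an illustrative assumption, not a claim about any solver.
#include <cassert>
#include <cmath>
#include <vector>

double solveCost(int n) { return std::pow(n, 3); }

// Total cost of solving each subproblem independently. Because the
// cost is superlinear, the largest subproblem contributes the
// dominant term, so total cost tracks the maximum subproblem size.
double totalCost(const std::vector<int>& subproblemSizes) {
    double total = 0.0;
    for (int s : subproblemSizes) total += solveCost(s);
    return total;
}
```

Under this model, splitting 10 agents into two subproblems of 5 cuts the cost from 1000 to 250, which is why a lower decomposition rate translates into lower time and memory costs.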

  1. Performance under different maps: As depicted in Fig. 3, the final decomposition rate (i.e., the decomposition rate after step 3) increases consistently with the number of agents across almost all maps, with the exception of maps containing a high number of free grids (e.g., den520d, Berlin_1_256). In the remaining maps, the number of free grids is close to the number of agents, leaving fewer opportunities to alter dependence paths and decompose the instance. Consequently, the size of the largest subproblem approaches that of the original MAPF instance. This limitation grows more pronounced as the number of agents increases, ultimately driving the decomposition rate toward 1 and rendering decomposition ineffective when agents are densely packed.

Conversely, in maps with a surplus of free grids compared to the number of agents, such as den520d and Berlin_1_256, decomposition remains highly effective despite increases in the number of agents. In these scenarios, the abundance of free grids results in numerous free-grid groups, making it easy to generate dependence paths that avoid other agents' start and target locations.

  2. How each step contributes to decomposition: As shown in Fig. 3, each step of the decomposition process contributes overall to a decrease in the decomposition rate (i.e., a reduction in the maximum subproblem size). However, the extent of each step's contribution varies across maps.

In maps characterized by a surplus of free grids exceeding the number of agents (such as Berlin_1_256, Paris_1_256, and den520d), the maximum subproblem size is reduced to merely 1 after step 1, even with hundreds of agents. This is because an agent's initial dependence path is highly likely to avoid passing through other agents' start or target locations. Consequently, steps 2 and 3 have no further room for decomposition, resulting in similar time costs after each step in these maps.

Conversely, in maps where the number of free grids is close to the number of agents (e.g., empty-16-16, empty-32-32, random-32-32-20, and maze-32-32-4), steps 2 and 3 contribute more to decomposition than step 1 as the number of agents increases. Here an agent's initial dependence path is less likely to avoid other agents' start or target locations, so steps 2 and 3 get the opportunity to decompose subproblems further by updating dependence paths.

B. Number of subproblems

The count of subproblems generally follows a pattern of initial increase followed by a decrease as the number of agents increases. Initially, decomposition easily generates small subproblems, leading to an increase in their count as the number of agents grows. However, as agents become denser, decomposing becomes more challenging, resulting in an increase in the decomposition rate and a subsequent decrease in the number of subproblems until it equals 1, indicating the raw MAPF instance.

Special cases arise in maps whose free grids greatly outnumber the agents, such as den520d and Berlin_1_256. In these instances, decomposition divides the raw instance into subproblems that each contain only one agent, so the number of subproblems equals the number of agents in the raw MAPF instance. Predictably, however, as the number of agents keeps increasing, decomposition will become increasingly ineffective, ultimately reducing the number of subproblems to 1.

C. Time cost and memory usage

Time cost and memory usage are crucial factors in the application of decomposition to MAPF. Although decomposing a MAPF instance should, in theory, reduce the time and memory needed to solve it, the decomposition process itself consumes resources; if it consumes too much, it may fail to reduce the total cost of solving the instance. We therefore analyze how many resources decomposition consumes across various maps.

Generally, as depicted in Fig. 3 and Fig. 4, the time cost of decomposition is less than 1 second on most maps and less than 3 seconds in the worst-case scenario (such as 800 to 1000 dense agents), which is acceptable for practical application. In the majority of cases, all three steps contribute to the time cost, though their shares differ across maps. In maps with an abundance of free grids exceeding the number of agents, such as den520d, Berlin_1_256, and Paris_1_256, step 1 consumes almost all of the time, so the cumulative time costs after steps 1, 2, and 3 overlap.
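The cumulative per-step times reported here can be gathered with a simple wrapper. A sketch using std::chrono, with placeholder step functions standing in for the paper's actual steps 1 to 3:

```cpp
// Sketch: record the cumulative elapsed time after each decomposition
// step, matching how the plots report time after steps 1, 2, and 3.
// The step callbacks are placeholders, not the authors' actual code.
#include <cassert>
#include <chrono>
#include <functional>
#include <vector>

// Run each step in order; return cumulative elapsed seconds after each.
std::vector<double> cumulativeStepTimes(
        const std::vector<std::function<void()>>& steps) {
    std::vector<double> times;
    auto start = std::chrono::steady_clock::now();
    for (const auto& step : steps) {
        step();
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        times.push_back(elapsed.count());
    }
    return times;
}
```

When step 1 dominates, the three recorded values are nearly identical, which is exactly the overlapping-curves pattern described for the free-grid-rich maps.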

Conversely, in maps where the number of free grids is close to the number of agents, all steps contribute to the time cost, as these scenarios require steps 2 and 3 to decompose the initial subproblems into smaller ones. Regarding memory usage, decomposition consumes less than 1 MB on almost all maps, which is acceptable for common platforms such as laptops or industrial process computers.

D. Summary

Our method is highly effective for MAPF instances whose free grids exceed twice the number of agents, and its effectiveness diminishes as the number of free grids approaches that threshold. This is because a higher number of free grids increases the likelihood of decomposing the instance into smaller subproblems. Likewise, for the same map, the effectiveness of decomposition decreases as the number of agents in the MAPF instance grows.

Regarding subproblems, in most maps the number of subproblems first increases and then decreases as the number of agents grows: with only a few agents decomposition is effective, but it loses effectiveness as agents accumulate. In terms of costs, the average time cost is less than 1 second, with a maximum below 3 seconds in the worst cases (such as 800 to 1000 dense agents), and memory usage remains below 1 MB. Comparatively, maps with more free grids than agents require fewer computations and less memory.

Authors:

  1. Zhuo Yao
  2. Wei Wang

This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.


Published by HackerNoon on 2026/02/19