
Conclusions and Discussions, Acknowledgment and References


Too Long; Didn't Read

Semantic SLAM involves the extraction and integration of semantic understanding with geometric data to produce detailed, multi-layered maps, aiding in the focused exploration of potential solutions.

This paper is available on arXiv under the CC BY-NC-ND 4.0 DEED license.

Authors:

(1) Akash Chikhalikar, Graduate School of Engineering, Department of Robotics, Tohoku University;

(2) Ankit A. Ravankar, Graduate School of Engineering, Department of Robotics, Tohoku University;

(3) Jose Victorio Salazar Luces, Graduate School of Engineering, Department of Robotics, Tohoku University;

(4) Yasuhisa Hirata, Graduate School of Engineering, Department of Robotics, Tohoku University.

VI. CONCLUSIONS AND DISCUSSIONS

In this study, we have demonstrated the feasibility of target search in indoor environments by effectively locating multiple targets using a novel heuristic. Our proposed approach offers a reliable region-to-region navigation strategy that can accommodate user preferences during the search. Region-to-region navigation is efficient in terms of both time and energy, and it is inherently more robust to obstacles and occlusions than point-to-point navigation. Furthermore, our system performs these tasks in real time and is suitable for small- to medium-sized indoor spaces such as homes and offices.
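To make the idea concrete, the following is a minimal, hypothetical sketch of region-to-region goal selection, not the implementation used in this work: candidate regions are scored by a simple heuristic that trades a semantic prior on the object's location against travel cost, and the robot hops between whole regions rather than individual waypoints. The region names, likelihood values, and cost weight below are illustrative assumptions only.

```python
# Illustrative sketch of region-to-region goal selection (hypothetical,
# not the authors' heuristic). Regions, scores, and weights are made up.
from dataclasses import dataclass
import math

@dataclass
class Region:
    name: str
    centroid: tuple           # (x, y) in the map frame
    target_likelihood: float  # prior that the sought object is in this region

def next_region(robot_xy, regions, visited, w_dist=0.1):
    """Pick the unvisited region with the best likelihood-vs-travel-cost trade-off."""
    best, best_score = None, -math.inf
    for r in regions:
        if r.name in visited:
            continue
        dist = math.dist(robot_xy, r.centroid)
        score = r.target_likelihood - w_dist * dist  # simple additive heuristic
        if score > best_score:
            best, best_score = r, score
    return best

# Example: the search hops between whole regions rather than raw waypoints.
regions = [
    Region("kitchen", (4.0, 1.0), 0.7),
    Region("living_room", (1.5, 3.0), 0.5),
    Region("hallway", (0.5, 0.5), 0.1),
]
visited = set()
robot_xy = (0.0, 0.0)
while (goal := next_region(robot_xy, regions, visited)) is not None:
    print(f"navigate to region: {goal.name} at {goal.centroid}")
    visited.add(goal.name)
    robot_xy = goal.centroid  # assume the region was reached and searched
```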


The system could be improved with decision-making strategies that optimize long-horizon navigation planning. Another non-trivial extension of this work is to add a manipulator that can perform high-level object pick-up tasks, and to extend the current system to more complex and challenging environments.

ACKNOWLEDGMENT

This work was partially supported by JST Moonshot R&D [Grant Number JPMJMS2034], JSPS Kakenhi [Grant Number JP21K14115], and JST SPRING [Grant Number JPMJSP2114].

REFERENCES

[1] D. Belanche, L. V. Casaló, C. Flavián, and J. Schepers, “Service robot implementation: a theoretical framework and research agenda,” The Service Industries Journal, vol. 40, no. 3-4, pp. 203–225, 2020.


[2] A. A. Ravankar, S. A. Tafrishi, J. V. S. Luces, F. Seto, and Y. Hirata, “CARE: Cooperation of AI robot enablers to create a vibrant society,” IEEE Robotics & Automation Magazine, 2022.


[3] M. Kim, S. Kim, S. Park, M.-T. Choi, M. Kim, and H. Gomaa, “Service robot for the elderly,” IEEE Robotics & Automation Magazine, vol. 16, no. 1, pp. 34–45, 2009.


[4] R. Martins, D. Bersan, M. Campos, et al., “Extending maps with semantic and contextual object information for robot navigation: a learning-based framework using visual and depth cues,” Journal of Intelligent and Robotic Systems, vol. 99, pp. 555–569, 2020.


[5] M. Hayat, S. H. Khan, M. Bennamoun, and S. An, “A spatial layout and scale invariant feature representation for indoor scene classification,” IEEE Transactions on Image Processing, vol. 25, no. 10, pp. 4829–4841, 2016.


[6] A. Rosinol, A. Violette, M. Abate, N. Hughes, Y. Chang, J. Shi, A. Gupta, and L. Carlone, “Kimera: From slam to spatial perception with 3d dynamic scene graphs,” The International Journal of Robotics Research, vol. 40, no. 12-14, pp. 1510–1546, 2021.


[7] C.-Y. Wang, A. Bochkovskiy, and H.-Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” arXiv preprint arXiv:2207.02696, 2022.


[8] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969, 2017.


[9] D. Fernandez-Chaves, J.-R. Ruiz-Sarmiento, N. Petkov, and J. Gonzalez-Jimenez, “ViMantic, a distributed robotic architecture for semantic mapping in indoor environments,” Knowledge-Based Systems, vol. 232, p. 107440, 2021.


[10] S. Hasegawa, A. Taniguchi, Y. Hagiwara, L. El Hafi, and T. Taniguchi, “Inferring place-object relationships by integrating probabilistic logic and multimodal spatial concepts,” in 2023 IEEE/SICE International Symposium on System Integration (SII), pp. 1–8, 2023.


[11] D. Batra, A. Gokaslan, A. Kembhavi, O. Maksymets, R. Mottaghi, M. Savva, A. Toshev, and E. Wijmans, “ObjectNav revisited: On evaluation of embodied agents navigating to objects,” arXiv preprint arXiv:2006.13171, 2020.


[12] K. Yadav, S. K. Ramakrishnan, J. Turner, A. Gokaslan, O. Maksymets, R. Jain, R. Ramrakhya, A. X. Chang, A. Clegg, M. Savva, E. Undersander, D. S. Chaplot, and D. Batra, “Habitat challenge 2022.” https://aihabitat.org/challenge/2022/, 2022.


[13] D. Silver and J. Veness, “Monte-Carlo planning in large POMDPs,” Advances in Neural Information Processing Systems, vol. 23, 2010.


[14] L. Holzherr, J. Förster, M. Breyer, J. Nieto, R. Siegwart, and J. J. Chung, “Efficient multi-scale POMDPs for robotic object search and delivery,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 6585–6591, IEEE, 2021.


[15] K. Zheng, Y. Sung, G. Konidaris, and S. Tellex, “Multi-resolution POMDP planning for multi-object search in 3D,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2022–2029, IEEE, 2021.


[16] K. Zheng, R. Chitnis, Y. Sung, G. Konidaris, and S. Tellex, “Towards optimal correlational object search,” in 2022 International Conference on Robotics and Automation (ICRA), pp. 7313–7319, IEEE, 2022.


[17] C. Wang, J. Cheng, W. Chi, T. Yan, and M. Q.-H. Meng, “Semantic-aware informative path planning for efficient object search using mobile robot,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 8, pp. 5230–5243, 2021.


[18] T. Choi and G. Cielniak, “Adaptive selection of informative path planning strategies via reinforcement learning,” in 2021 European Conference on Mobile Robots (ECMR), pp. 1–6, 2021.


[19] F. Zhou, H. Liu, H. Zhao, and L. Liang, “Long-term object search using incremental scene graph updating,” Robotica, vol. 41, no. 3, pp. 962–975, 2023.


[20] A. C. Hernandez, E. Derner, C. Gomez, R. Barber, and R. Babuška, “Efficient object search through probability-based viewpoint selection,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6172–6179, 2020.


[21] A. Chikhalikar, A. A. Ravankar, J. V. S. Luces, S. A. Tafrishi, and Y. Hirata, “An object-oriented navigation strategy for service robots leveraging semantic information,” in 2023 IEEE/SICE International Symposium on System Integration (SII), pp. 1–6, 2023.


[22] A. A. Ravankar, A. Ravankar, T. Emaru, and Y. Kobayashi, “A hybrid topological mapping and navigation method for large area robot mapping,” in 2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), pp. 1104–1107, IEEE, 2017.


[23] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A. Y. Ng, et al., “ROS: an open-source Robot Operating System,” in ICRA workshop on open source software, vol. 3, p. 5, Kobe, Japan, 2009.


[24] P. Fankhauser and M. Hutter, “A Universal Grid Map Library: Implementation and Use Case for Rough Terrain Navigation,” in Robot Operating System (ROS) – The Complete Reference (Volume 1) (A. Koubaa, ed.), ch. 5, Springer, 2016.


[25] “Azure Kinect DK Sensor SDK.” https://docs.microsoft.com/en-us/azure/kinect-dk/sensor-sdk-download. Accessed: 2022-08-13.


[26] F. Endres, J. Hess, J. Sturm, D. Cremers, and W. Burgard, “3-D mapping with an RGB-D camera,” IEEE Transactions on Robotics, vol. 30, no. 1, pp. 177–187, 2014.


[27] M. Labbé and F. Michaud, “RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation,” Journal of Field Robotics, vol. 36, no. 2, pp. 416–446, 2019.


[28] H. W. Kuhn, “The Hungarian method for the assignment problem,” Naval Research Logistics Quarterly, vol. 2, no. 1-2, pp. 83–97, 1955.


[29] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, pp. 740–755, Springer, 2014.