Authors:

(1) Maria Rigaki, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic (maria.rigaki@fel.cvut.cz);

(2) Sebastian Garcia, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic (sebastian.garcia@agents.fel.cvut.cz).

Table of Links

Abstract & Introduction
Threat Model
Background and Related Work
Methodology
Experiments Setup
Results
Discussion
Conclusion, Acknowledgments, and References
Appendix

2 Threat Model

The threat model for this work assumes an attacker with only black-box access to the target (a classifier or an antivirus) during the inference phase, who can submit binary files for static scanning. The target returns a binary label: 0 if the file is benign, 1 if it is malicious. The attacker has little or no information about the target's architecture and training process, and they aim to evade it by modifying the malware in a functionality-preserving manner. In the case of classifiers, the attacker may have some knowledge of the extracted features, but this is not the case for antivirus systems. Some knowledge of the training data distribution is assumed, although it may be only partially necessary. Finally, the attacker aims to minimize interaction with the target by submitting as few queries as possible.

This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.
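The threat model above can be summarized as an oracle interface: the attacker submits binaries, observes only a 0/1 label, and tracks a query budget. The following is a minimal sketch of that interaction loop; all names (`BlackBoxTarget`, `scan`, `evade`, `modify`) are illustrative placeholders, not from the paper, and the real modification step would be a functionality-preserving binary transformation rather than the toy one shown here.

```python
class BlackBoxTarget:
    """Stands in for a classifier or AV: exposes only a binary label."""

    def __init__(self, is_malicious):
        self._is_malicious = is_malicious  # hidden decision function
        self.query_count = 0               # cost metric the attacker wants to minimize

    def scan(self, binary: bytes) -> int:
        """Static scan of a submitted binary: returns 0 (benign) or 1 (malicious)."""
        self.query_count += 1
        return 1 if self._is_malicious(binary) else 0


def evade(target: BlackBoxTarget, malware: bytes, modify, max_queries: int = 100):
    """Repeatedly apply a functionality-preserving modification until the
    target labels the sample benign or the query budget is exhausted."""
    sample = malware
    for _ in range(max_queries):
        if target.scan(sample) == 0:
            return sample        # evasion succeeded
        sample = modify(sample)  # functionality-preserving change (placeholder)
    return None                  # budget exhausted, evasion failed
```

Note that the loop never inspects anything but the returned label, matching the assumption that internal scores, features, and architecture details are hidden from the attacker.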