Authors:
(1) Maria Rigaki, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic and [email protected];
(2) Sebastian Garcia, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic and [email protected].
Conclusion, Acknowledgments, and References
The threat model for this work assumes an attacker who has only black-box access to a target (classifier or AV) during the inference phase and can submit binary files for static scanning. The target returns hard binary labels (0 if benign, 1 if malicious). The attacker has limited or no information about the target's architecture and training process, and they aim to evade it by modifying the malware in a functionality-preserving manner. In the case of classifiers, the attacker may have some knowledge of the extracted features, but this is not the case for antivirus systems. Some knowledge of the training data distribution is assumed, although it may be only partially necessary. Finally, the attacker aims to minimize interaction with the target by submitting as few queries as possible.
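The query interface described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the names `BlackBoxTarget`, `submit`, `evade`, and the toy scanning and modification functions are all hypothetical, and it only captures the essentials of the threat model, namely that the attacker sees hard 0/1 labels and pays a cost per query.

```python
# Hypothetical sketch of the black-box threat model: the attacker can only
# submit binaries and observe hard labels, and wants to minimize queries.

class BlackBoxTarget:
    """Stands in for a classifier/AV: returns 0 (benign) or 1 (malicious)."""

    def __init__(self, scan_fn):
        self._scan = scan_fn    # opaque static-scanning logic (not visible)
        self.query_count = 0    # attacker-side cost: number of submissions

    def submit(self, binary: bytes) -> int:
        self.query_count += 1
        return self._scan(binary)  # hard label only: 0 or 1, no scores


def evade(target, malware: bytes, modify_fn, max_queries: int = 10):
    """Apply functionality-preserving modifications until the target labels
    the sample benign (0) or the query budget runs out."""
    sample = malware
    for _ in range(max_queries):
        if target.submit(sample) == 0:
            return sample           # evasive variant found
        sample = modify_fn(sample)  # placeholder for a real transformation
    return None                     # budget exhausted without evasion


# Toy demonstration: a fake scanner that flags short files as malicious,
# and a "modification" that pads the binary (stand-in for real transforms).
target = BlackBoxTarget(lambda b: 1 if len(b) < 10 else 0)
variant = evade(target, b"bad", lambda b: b + b"\x00")
```

The key point the sketch conveys is that every call to `submit` increments `query_count`, so an evasion strategy is evaluated both by whether `evade` returns a benign-labeled variant and by how many queries it spent.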
This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.