Too Long; Didn't Read
There are four types of attacks that ML models can suffer. In an extraction attack, an adversary steals a copy of a remotely deployed machine learning model, given oracle prediction access. Extraction attacks aim to extract as much information as possible and, with the resulting set of inputs and outputs, train a model called a substitute model. Extracting a model is hard: the attacker needs huge compute capacity to retrain the new model with accuracy and fidelity, and building a substitute model is equivalent to training a model from the ground up.
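The two measures named above, as an added example that is not from the original article: a toy 2-D linear "victim" is queried through a label-only oracle, a substitute is trained on the input/output pairs, and fidelity (agreement with the victim) and accuracy (agreement with the ground truth) are reported per query budget. The weights, budgets and function names are constructed for illustration.

```python
import math
import random

TRUE_W = (2.0, -1.0, 0.3)    # constructed ground-truth boundary: w1, w2, bias
VICTIM_W = (1.8, -1.2, 0.2)  # constructed deployed model, slightly off the truth

def label(w, x):
    """Hard label of a linear classifier: all the oracle ever returns."""
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0

def sample(rng, n):
    """n random points in the square [-1, 1] x [-1, 1]."""
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]

def train_substitute(queries, answers, epochs=50, lr=0.5):
    """Fit logistic regression by SGD to the oracle's (input, output) pairs."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(queries, answers):
            z = w[0] * x[0] + w[1] * x[1] + w[2]
            z = max(-60.0, min(60.0, z))       # keep exp() in range
            g = 1.0 / (1.0 + math.exp(-z)) - y  # gradient of log-loss wrt z
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            w[2] -= lr * g
    return w

def agreement(w_a, w_b, points):
    """Fraction of points on which two classifiers give the same label."""
    return sum(label(w_a, x) == label(w_b, x) for x in points) / len(points)

def extraction_curve(budgets, seed=7):
    """Return (queries, fidelity, accuracy) for each query budget."""
    rng = random.Random(seed)
    holdout = sample(rng, 2000)
    rows = []
    for n in budgets:
        queries = sample(rng, n)
        answers = [label(VICTIM_W, x) for x in queries]  # oracle access only
        sub = train_substitute(queries, answers)
        rows.append((n, agreement(sub, VICTIM_W, holdout),
                     agreement(sub, TRUE_W, holdout)))
    return rows

if __name__ == "__main__":
    for n, fid, acc in extraction_curve([5, 50, 500, 2000]):
        print(f"queries={n:5d}  fidelity={fid:.3f}  accuracy={acc:.3f}")
```

Even for this three-parameter model the substitute only tracks the victim closely once it has seen many oracle answers; for model owners, the per-client query budget is therefore the quantity to watch.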