Let’s begin with a simple introduction to the world of adversarial inputs. These are inputs to a <a href="https://hackernoon.com/tagged/machine-learning" target="_blank">machine learning</a> classifier that have been shrewdly perturbed so that the changes are nearly invisible to the naked eye, yet fool the classifier into predicting either an arbitrary wrong class (untargeted) or a specific wrong class (targeted).
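To make the idea concrete, here is a minimal sketch of an untargeted attack in the spirit of the fast gradient sign method (FGSM): nudge each input coordinate by a small amount in the direction that increases the model's loss. The classifier, weights, and input below are toy values invented purely for illustration, not anything from a real system.

```python
import numpy as np

def predict(w, x):
    # Toy linear binary classifier: class +1 if w.x > 0, else -1.
    return 1 if w @ x > 0 else -1

def fgsm_untargeted(w, x, y, eps):
    # Untargeted FGSM step. For a linear logistic model the
    # gradient of the loss w.r.t. the input x is proportional
    # to -y * w, so its sign is -y * sign(w). We move eps in
    # that direction, bounding the perturbation of every
    # coordinate by eps (an L-infinity budget).
    grad_sign = -y * np.sign(w)
    return x + eps * grad_sign

w = np.array([0.5, -0.3, 0.8])   # made-up model weights
x = np.array([1.0, 1.0, 1.0])    # made-up clean input
y = predict(w, x)                # model's label on the clean input

x_adv = fgsm_untargeted(w, x, y, eps=0.7)
print(predict(w, x))                 # prediction on the clean input
print(predict(w, x_adv))             # prediction flips on the perturbed input
print(np.max(np.abs(x_adv - x)))     # each coordinate moved by at most eps
```

On real image classifiers the same idea applies in high dimensions, which is why a per-pixel budget small enough to be imperceptible can still accumulate into a decision flip.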
Vinay Prabhu
@vinay-prabhu
PhD, Carnegie Mellon University
Chief Scientist, UnifyID