Security concerns are a top barrier to AI implementation

by Modzy, August 9th, 2021

Too Long; Didn't Read

Modzy offers a software platform for organizations and developers to responsibly deploy, monitor, and get value from AI at scale. Its patented adversarial defense solution keeps models robust against attacks and poisoned data and protects them from stealing attempts, while its model watermarking solutions let you validate provenance information for models running in production.

AI is brittle. It can be fooled. Threats to the accuracy and performance of your models lurk in unexpected parts of your pipeline.

AI used in critical business systems must be secure against attempts to generate misinformation or degrade model performance. Modzy is charting the path forward for a new level of AI performance and AI model security. Our patented adversarial defense solution keeps your models robust against attacks, scans data for adversarial inputs, maintains model integrity against poisoned data, and protects models from stealing attempts. Additionally, our model watermarking solutions allow you to validate provenance information for models running in production.

Security is often cited as a top barrier to AI implementation. Yet many organizations haven't adopted a comprehensive approach to securing AI in production environments or to addressing the nuances of AI model security. Most production systems don't even have a process to validate provenance information for the models they run.

Adversarial Solutions

Keeping Models Robust

  • Allows models to learn and make decisions in a manner similar to that of humans
  • Allows models to make predictions in unfamiliar environments and under the threat of adversarial attacks
  • Enhances the backpropagation algorithm commonly used across industry to train deep learning models
  • Trains models to rely on a holistic set of features learned from the input when making predictions (a minimal sketch of this idea follows below)
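
To make that last point concrete, here is a minimal sketch of adversarial training in PyTorch. It is an illustrative stand-in, not Modzy's patented method: the widely used FGSM attack perturbs each batch, and the usual backpropagation update is augmented with a loss term on those worst-case inputs, pushing the model toward holistic, robust features. All function and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.1):
    """Craft a worst-case perturbation inside an epsilon L-infinity ball."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y, alpha=0.5):
    """One backprop step augmented with an adversarial loss term."""
    x_adv = fgsm_perturb(model, loss_fn, x, y)
    optimizer.zero_grad()  # clear gradients left over from crafting x_adv
    # Blend clean and adversarial losses: standard backpropagation,
    # enhanced so gradients also flow from worst-case inputs.
    loss = alpha * loss_fn(model(x), y) + (1 - alpha) * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a small classifier on random data stands in for a real pipeline.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
print(adversarial_training_step(model, loss_fn, optimizer, x, y))
```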

Data Scanning Solution

  • Identifies adversarial inputs during both training and inference, across different datasets and domains
  • The first method to use the transferability of attacks between different models, in both simulated and production environments, to detect adversarial inputs in a dataset before they are fed to the deep learning model (see the sketch below)
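
The transferability idea in that last bullet can be pictured with a simple committee heuristic: because adversarial perturbations crafted against one model tend to fool independently trained models too, inputs whose predicted labels flip across a diverse committee are suspicious. This is a hypothetical illustration assuming PyTorch classifiers, not Modzy's patented detector; the names and threshold are made up.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def scan_batch(production_model, surrogate_models, x, agreement_threshold=0.8):
    """Flag inputs whose predicted labels disagree across a model committee."""
    models = [production_model, *surrogate_models]
    votes = torch.stack([m(x).argmax(dim=1) for m in models])  # (n_models, batch)
    majority, _ = votes.mode(dim=0)
    agreement = (votes == majority).float().mean(dim=0)
    # Attacks tend to transfer, but rarely fool every model identically:
    # low agreement suggests an adversarial input worth quarantining.
    return agreement < agreement_threshold  # True = quarantine before inference

# Toy usage: randomly initialized nets stand in for independently trained models.
def make_net():
    return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

production = make_net()
surrogates = [make_net() for _ in range(4)]
x = torch.randn(16, 20)
print(scan_batch(production, surrogates, x))
```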