Beyond the Perimeter: Securing AI for the Quantum Era

Written by viceasytiger | Published 2026/02/04
Tech Story Tags: machine-learning | machine-learning-research | quantum-computing | ai-and-quantum-computing | founder-interview | ai-security | ml-security | privacy-preserving-ai

TL;DR: Most AI systems don’t fail because the models are bad — they fail because the systems around them are fragile, insecure, and poorly governed. In this interview, Jeremy Samuelson, EVP of AI & Innovation at Integrated Quantum Technologies, explains why the real ceiling of applied AI is set by architecture and data movement, not model accuracy. He also introduces VEIL, a new security architecture that removes sensitive data from the ML pipeline entirely, making AI systems breach-resilient and inherently quantum-safe by design.

For the last few years, the AI industry has been obsessed with one question: How do we build better models?

Bigger models. Smarter models. More parameters. Better benchmarks.

But inside banks, credit bureaus, and global enterprises, AI rarely fails because the model isn’t good enough. It fails because the systems around it are fragile, the data is poorly governed, and the security assumptions are outdated.

Jeremy Samuelson, Executive VP of Artificial Intelligence and Innovation at Integrated Quantum Technologies, has spent more than 15 years building AI systems in some of the most unforgiving environments possible, including fraud detection at Mastercard and Equifax and large-scale optimization at a Coca-Cola subsidiary.

In this interview, Jeremy explains why the real ceiling of applied AI is set by architecture and data movement, rather than model accuracy. He also introduces VEIL (Vector Encoded Information Layer) – a breakthrough in informationally compressive anonymization that allows models to run on data that is mathematically non-invertible and inherently quantum-resilient.

Enjoy the read!

The Deployment Gap: Why AI POCs Fail

You’ve been building real-world AI systems for over 15 years, long before today’s hype cycles. When you look back, what do you think we misunderstood most about what actually makes AI work in production?

Jeremy Samuelson (JS): When I started, there really was no established standard for deploying Machine Learning systems. Honestly, with the first models we built, we were just happy if they ran in production at all. Everything was scrappy. The IT departments we worked with were struggling too: how do we do this? How do we get these Machine Learning models deployed at all?

Early on, we understood very little about the deployment lifecycle, and even less about the security risks. What surprises me today is that many organizations still struggle to get a model past the proof-of-concept stage. They’re very good at hiring mathematicians and operations researchers who build impressive models, but that’s usually where things stop.

I still have a lot of clients calling me because they have a POC and simply don’t know how to deploy it, especially at enterprise scale. And even organizations that do deploy models often don’t fully understand how vulnerable they are.

Machine Learning deployments create an entirely new attack surface, and most traditional cybersecurity teams are completely unprepared for it.

You’ve worked on fraud detection at Mastercard and Equifax, and large-scale optimization at a Coca-Cola subsidiary. Now you’re focused on AI governance and security. What pattern keeps repeating across all these environments that the mainstream AI narrative still refuses to acknowledge?

JS: In terms of repeating patterns, the one thing I see over and over again from Mastercard to Coca-Cola to Equifax is just the struggle to get something out of the proof-of-concept phase and into deployment at all.

But the deeper one is this: most organizations don’t actually have a coherent data governance model.

Equifax was very good about that. They had a very clear classification tier structure – for any given piece of data you’re dealing with, what class does it fall into, and what are the rules about what you can do with that class? Other organizations, even very big organizations I've worked for, haven't even gotten there yet. They aren't even thinking about it.

I would say that’s maybe the most consistent pattern: the complete lack of one.

So would it be fair to say that most teams obsess over models – bigger models, better accuracy – but the real ceiling is set by architecture, data movement, governance, and operational risk?

JS: That’s exactly right. It’s definitely the case that if you go into an organization and talk to their data science teams, their MLOps teams, or even the project managers who own these AI initiatives, many of them are very focused on the biggest and best, latest and greatest model architectures. I’ve even spoken with project managers who almost treat the size of a model as a metric of success. They get very excited about having big models deployed in their systems or projects, so there’s a lot of obsession with complexity – big, complex models and all the metrics around predictive performance.

And when it comes to security, no one’s really thinking about it. No one has metrics around “Are we securing this deployment?” They’re often not even aware there’s a risk.

If you bring up security, they usually put all their faith in firewalls and perimeter security. So if you say, “There are real privacy or security risks here,” the answer is almost always, “Oh no, it’s fine, this is all happening within our secure environment.”

They don’t really think past the fact that a secure environment is secure, until it’s not. We hear about data breaches in the news all the time.

So once someone is in, what then? How are you protecting the sensitive data being sent to your model APIs? Nobody really wants to think that hard about that additional layer of security.

Breaking the Privacy Paradox in AI

This feels like a good moment to talk about your recent work on the development of the Vector Encoded Information Layer – VEIL. How did your experience with high-stakes fraud detection lead to this invention?

JS: I came up with the idea back when I was at Equifax. I was the principal AI scientist in the Digital Identity Engineering vertical, responsible for modeling identity fraud, especially for new account openings.

So let’s say you’re a bank. A consumer applies for an account – online or in a brick-and-mortar branch. The bank has to “know your customer,” so it calls our API, which runs three models: one for synthetic identity fraud, one for third-party fraud, and one for first-party fraud.

First-party fraud is surprisingly common. That’s when I open a card in my own name, max it out, and then 30 days later say, “Oh gosh, I don’t remember opening this account. I must be a victim of identity fraud.”

So we had models for each of these situations. Each model returns a score from zero to 100.
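
To make the shape of that concrete, here is a minimal sketch of how a bank-side client might consume such a response. The field names, stand-in scores, and review threshold are invented for illustration; this is not Equifax's actual API.

```python
# Hypothetical shape of a "know your customer" scoring call. Field names,
# scores, and the threshold below are invented for illustration only.

application = {
    "name": "Jane Doe",
    "date_of_birth": "1990-01-01",
    "ssn_last4": "1234",        # in the real system this is highly sensitive PII
    "address": "123 Main St",
    "channel": "online",
}

def score_application(app: dict) -> dict:
    """Stand-in for the API call that runs the three fraud models.

    In production this would be an HTTPS request to the scoring service;
    here it just returns a plausible response shape with 0-100 scores.
    """
    return {"synthetic_identity": 12, "third_party": 7, "first_party": 64}

# The bank applies its own policy threshold to each score.
REVIEW_THRESHOLD = 60
for fraud_type, score in score_application(application).items():
    decision = "manual review" if score >= REVIEW_THRESHOLD else "approve"
    print(f"{fraud_type}: {score} -> {decision}")
```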

To do this well, you have to look at extremely sensitive information. More than in any other fraud system I’d worked on, it hit me how invasive this is.

We’re trying to stop bad actors, but if we’re not careful, we’re just creating another vulnerability that leads to more breaches and more stolen identities. That would be pretty ironic.

Around the same time, we hit another problem: deployment. Equifax is a multinational organization serving financial institutions worldwide. The engineering and compliance teams told us we’d need regionally pinned deployments, meaning the entire MLOps pipeline would have to be duplicated in every region: the US, Canada, the EU, Australia, and so on. Multiple instances of the model deployed, multiple pipelines, everything duplicated everywhere, which is kind of a mess to roll out.

So I started asking: how do we do this more efficiently?

Regulations like GDPR, CCPA, and CPRA make it clear that PII must remain in its home jurisdiction. You can still have regional databases, but I wanted one model serving all regions.

The problem is that once data leaves its jurisdiction, it must be anonymized or encrypted and can never be decrypted outside that boundary. You can’t encrypt data in Australia, send it to the US, decrypt it there, and run a model, even in a “secure” environment.

So I realized we needed a way to put data into a protected form and keep it that way forever – never decrypting it, never re-identifying it – and still let the model work.

I first looked at Homomorphic Encryption and Differential Privacy.

Homomorphic Encryption looks great, but in practice, data can balloon by a factor of a thousand. Latency and costs explode, and performance drops: a model that detects 90% of fraud might fall to 80% or 75%.

Differential Privacy primarily protects against model inversion attacks, but not against in-flight interception. It also adds noise and degrades performance.

So neither approach worked.

I spent two and a half years experimenting and eventually arrived at VEIL — the Vector Encoded Information Layer.

VEIL lives next to the source database. Sensitive data is encoded there, and only the encoding ever leaves. The model works directly on that representation.

You still trust the source environment, but instead of expanding it, you keep it small and tightly controlled. Only veiled encodings leave.

If your data is on-prem but you want cloud compute, now you can – without exposing the data.

We’ve proven mathematically that these encodings are not invertible. None of your training or inference infrastructure ever sees real data. There’s nothing to steal.
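
The interview does not describe VEIL's actual encoding, but the underlying idea, a lossy transformation applied next to the source data so that only compressed vectors ever leave, can be illustrated with a toy stand-in: a secret low-rank random projection. This is a minimal sketch of the concept under that assumption, not Samuelson's method.

```python
import numpy as np

# The projection matrix plays the role of a secret that never leaves the
# trusted source environment (purely illustrative; not the VEIL algorithm).
rng = np.random.default_rng(42)
n_features, encoded_dim = 400, 16
proj = rng.standard_normal((n_features, encoded_dim)) / np.sqrt(encoded_dim)

def toy_encode(X: np.ndarray) -> np.ndarray:
    """Lossy, compressive encoding: project 400 features down to 16.

    Because the map is rank-deficient, infinitely many raw rows produce the
    same encoding, so the original records cannot be uniquely reconstructed
    from the encodings alone.
    """
    return X @ proj

# Inside the trusted environment: raw records never leave this process.
raw_records = rng.random((10_000, n_features))   # stand-in for sensitive data
encodings = toy_encode(raw_records)              # only this array is shipped out

print(f"raw: {raw_records.nbytes:,} bytes -> encoded: {encodings.nbytes:,} bytes")
# A model in a separate (cloud) environment trains and scores on `encodings` only.
```

Because the projection maps many dimensions onto few, the system is underdetermined and the encoding alone cannot be uniquely inverted; whether a given encoding also preserves enough signal for a specific model is an empirical question.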

And our security posture is different. We don’t say: “We trust all of our perimeter security, we trust our firewalls, we trust all of our controls, so it’s fine that we’re just moving data around inside there.” That’s absolutely not how we approach it. We instead say: “Attackers are very clever. Attackers are very resourceful. New exploits are found all the time that let attackers get into environments.” So you should assume that, at some point, one of these environments will be compromised by a bad actor.

And what we’ve done is make it so that ultimately doesn’t matter. If data is intercepted, it’s not real data.

If the environment is compromised, it never sees real data. If someone runs gradient or model inversion attacks, the model never sees real training data; it only sees veiled encodings.

That’s essentially what the VEIL architecture is, and how it’s different from everything else.

So that’s basically a breakthrough – a next-generation security layer. Would you call it that?

JS: Yes, absolutely. We’ve tested this across classification and regression. It applies to essentially any supervised ML problem, and it does not degrade model performance.

If a model detects 90% of fraud, it can still detect at least that much with veiled encodings. That was the most important thing for me.

We’re also not using encryption. The data gets smaller, not bigger. We use ICA — informationally compressive anonymization. We intentionally destroy information that attackers want. We’ve achieved 96% to 99% compression while preserving predictive utility.

That means lower latency, lower cost, intact SLAs, and less complexity – without the usual tradeoffs.

And because there’s no encryption to break, and the information is already gone, this is also quantum-resilient. You can’t extract information that isn’t there.
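
Claims like "no degradation" are straightforward to check on your own data before committing to any encoding. The sketch below reuses the toy random-projection stand-in from above (again, not VEIL itself) and simply compares the same classifier trained on raw features versus compressed encodings; the synthetic dataset and model choice are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced "fraud-like" data: many features, rare positive class.
X, y = make_classification(n_samples=20_000, n_features=400, n_informative=40,
                           weights=[0.95, 0.05], random_state=0)

# Toy compressive encoding (a stand-in for a veiled representation, not VEIL itself).
rng = np.random.default_rng(0)
proj = rng.standard_normal((400, 16)) / np.sqrt(16)
X_encoded = X @ proj

def auc_on(features: np.ndarray) -> float:
    """Train the same model on the given features and report held-out ROC AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print("AUC on raw features:     ", round(auc_on(X), 3))
print("AUC on 16-dim encodings: ", round(auc_on(X_encoded), 3))
```

How much utility survives depends entirely on the data and the encoding; the point is only that the comparison is cheap to run.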

Is VEIL already running in production anywhere, or is it just experimental?

JS: It started as a proof of concept. I showed it to the advisory board at Integrated Quantum Technologies and said, "Hey, I have this proof of concept."

Since then, we’ve raised capital and assembled a team. Everybody on the team is somebody I’ve worked with before and trust highly; I know they’re very capable. That team has spent many months hardening and productionizing the system, engineering the entire framework into an enterprise-ready solution.

We’re now looking for POC customers and are already in talks with several organizations that use highly sensitive data.

We’re also focused on making this easy to adopt. We’re about to release a Snowflake native app, and we’re close to launching on the AWS Marketplace, with Azure and GCP to follow.

The bigger goal is not just privacy-preserving ML, but making deployment itself easy. I’m really hoping to solve that problem as well. We’re putting a lot of thought into the overall user experience to make this as easy to use as possible, so that for organizations that have historically struggled to get their ML models past the POC stage and into production, this system makes that step straightforward. The idea is that in a matter of minutes, you can be up and running with an enterprise-grade deployment.

I think many teams will be happy just to finally be able to get their model deployed at all, and for them, the fact that it’s also privacy-preserving will almost feel like a bonus.

Ideally, privacy-preserving ML deployments should actually become easier than ML deployments have historically been.

The Road Ahead

We’ve talked a lot about AI, security, and privacy, but zooming out a bit, how do you see the bigger picture evolving? Between the rise of AI and the looming quantum era, what worries you most about where this is all heading?

JS: I do think the quantum threat is getting closer. Some people are skeptical. Maybe they’re right. But what if they’re wrong? There’s an enormous national investment going into this.

But there’s another risk people don’t talk about. As AI displaces IT professionals, you get a large group of highly skilled people who understand enterprise systems, security gaps, and how to exploit them.

History is clear: when legitimate markets can’t absorb skilled labor, gray and black markets do.

So attackers will get much more sophisticated. We’ll see automated attacks, DevOps pipelines for cybercrime, A/B-tested scams—things we haven’t seen at scale before.

If this trend continues, cybercrime is going to start looking a lot like a professional software industry.


As models get more powerful, the cost of getting the surrounding systems wrong gets higher. At the same time, the threat landscape is changing: quantum computing is no longer a purely theoretical concern, and AI-driven automation is reshaping not just how software is built, but who builds it and who attacks it.

Whether VEIL itself becomes the dominant answer or not, the deeper message of this story is harder to ignore: the future of AI will be decided less by how smart our models are and more by how seriously we take the systems and the adversaries around them.



