Perhaps you’ve noticed that war photos appearing in the news are being questioned much more frequently than before. In many instances, they’ve been identified as fakes by expert inspection, the consensus of a community of people, or analysis tools such as fake-photo detectors.
Whereas 2023 may be looked upon as the year when it became standard practice to question whether photos, and in many cases videos and documents, are real, 2024 is poised to be the year when it becomes standard practice to run every questionable photo, video, and document through AI systems to determine its authenticity.
After years of experts warning about a potential scenario in which deepfakes have the power to skew reality, it only makes sense to ask a couple of questions: has that scenario arrived, and what can be done about it?
With the explosion of generative AI text tools such as ChatGPT in 2023, it was easy to lose track of how quickly generative AI image tools advanced in parallel.
At the same time, many similar frameworks emerged from several vendors and the open-source community.
Many of these frameworks underwent significant improvements in 2023 alone, and the outcome is clear: the ability to generate fake or AI-altered photos is now in the hands of the masses, and the ability to generate fake videos and documents is here and improving rapidly.
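To appreciate just how low the barrier has become, consider a minimal sketch using the open-source diffusers library; the checkpoint name below is just one of many publicly available options, and any recent text-to-image model behaves similarly:

```python
# A minimal sketch of how accessible image generation has become.
# Assumes the open-source diffusers and torch packages are installed;
# substitute any publicly available text-to-image checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one of many public checkpoints
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a single consumer GPU is enough

# A few words of text are all it takes to produce a photorealistic image.
image = pipe("a news photo of a city street after a storm").images[0]
image.save("generated.png")
```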
So now that the proverbial cat is out of the bag, a number of efforts are circulating to help resolve the issue of fakes: government policies and controls, standards for identifying fake photos, videos, and documents, and anti-AI frameworks you can begin using today.
While government policies and controls may represent a valiant effort to save us from this impending doom, open-source frameworks tend to elude such controls because they are not owned by any particular organization.
Moreover, the effort would require the governments of virtually all nations to act in tandem to solve a problem that has effectively spread worldwide.
Standards to identify fake photos are another worthy effort: they ask the corporations developing generative AI to digitally watermark the photos and videos they create so that the content can be identified as fake.
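To illustrate the simplest form of such marking, here is a minimal sketch that inspects an image's metadata for provenance hints using Pillow. It assumes the generator writes an identifying string into standard EXIF or PNG text fields; robust watermarks are embedded in the pixels themselves and require the vendor's own detector.

```python
# A minimal sketch: look for provenance hints in image metadata.
# Assumes the generator writes an identifying string into standard
# EXIF or PNG text fields; robust watermarks live in the pixels and
# need the vendor's own detector.
from PIL import Image
from PIL.ExifTags import TAGS

GENERATOR_HINTS = ("stable diffusion", "dall-e", "midjourney", "generated")

def provenance_hints(path: str) -> list[str]:
    """Return metadata values suggesting the image was AI-generated."""
    img = Image.open(path)
    hits = []
    # EXIF fields such as Software or ImageDescription often name the tool.
    for tag_id, value in img.getexif().items():
        name = TAGS.get(tag_id, str(tag_id))
        if any(h in str(value).lower() for h in GENERATOR_HINTS):
            hits.append(f"{name}: {value}")
    # PNG text chunks (e.g. the 'parameters' field some tools write).
    for key, value in img.info.items():
        if any(h in str(value).lower() for h in GENERATOR_HINTS):
            hits.append(f"{key}: {value}")
    return hits

if __name__ == "__main__":
    print(provenance_hints("suspect.png") or "no provenance markers found")
```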
However, these corporations are the good actors in any information warfare that ensues. What about the bad actors who leverage open source or find ways to circumvent or remove the watermarking?
The answer, if you haven’t guessed, is that there is no way to get bad actors to play nice.
That leaves anti-AI frameworks, or fake detectors, as the best available and most proactive tool, particularly for businesses that need to protect important data against the onslaught of fakes.
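To make this concrete, here is a minimal sketch of running a questionable image through such a detector using the Hugging Face transformers image-classification pipeline; the model id is a hypothetical stand-in for whichever detector you adopt:

```python
# A minimal sketch of running a suspect image through a fake detector.
# Assumes the transformers package; the model id is a hypothetical
# stand-in for whichever AI-image detector you choose to adopt.
from transformers import pipeline

detector = pipeline("image-classification", model="org/ai-image-detector")

results = detector("suspect.png")
for r in results:
    # Typical output: [{'label': 'artificial', 'score': 0.97}, ...]
    print(f"{r['label']}: {r['score']:.2f}")
```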
Tools that identify what is real and what is fake are the best immediate defense, though there are a few caveats everyone should be aware of.
The first caveat is that static tools will become obsolete very quickly as generative AI technology advances; detectors must improve constantly to keep pace with constantly improving fakes.
While some may argue that this is a losing battle, it is the only battle for those who want to guard against fraud and potential reputation damage.
The other caveat is that a detection framework needs to analyze content rapidly and at scale to counter incoming threats. Thanks to GPU technology and cloud scale, this is all possible today.
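As a sketch of what "rapidly and at scale" can look like in practice, the same kind of detector pipeline can be placed on a GPU and fed images in batches; the model id is again a hypothetical stand-in:

```python
# A minimal sketch of scaling detection with GPU batching.
# Assumes torch with CUDA and the transformers package; the model id
# is again a hypothetical stand-in for your chosen detector.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="org/ai-image-detector",  # hypothetical checkpoint
    device=0,        # run on the first GPU
    batch_size=32,   # score images in batches rather than one by one
)

# Thousands of incoming files can be scored in a single pass.
paths = [f"incoming/photo_{i}.jpg" for i in range(1000)]
for path, result in zip(paths, detector(paths)):
    top = result[0]  # highest-confidence label for this image
    print(path, top["label"], f"{top['score']:.2f}")
```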
Ultimately, while several options for thwarting fakes are floating around, the best defense is a tangible strategy rather than hoping the problem goes away or that others will resolve it for you.
Those who take early action will benefit in the long term. Luckily, there are proactive steps you can take to protect yourself and your business against the growing threat of fakes.