Andrej Černý: CS student from Czechoslovakia. I study the intersection between malware and media.
Scam advertisements across major media outlets originate from DoubleClick and Xandr/AppNexus
By now, all of us have been affected by COVID-19 through stay-at-home orders, business closures and cancelled events. This year, I wasn’t able to visit Prague over spring break like I had hoped to, thanks to the shutdowns. Fortunately, the extra free time allowed me to continue my work studying malicious ads across the Internet, and it’s a good thing I did.
This time around, I had my eyes peeled for something specific: advertisers have already been caught peddling fake coronavirus “cures” on Instagram and Facebook, alongside storefronts offering commodities like face masks and hand sanitizer at grossly marked-up prices. In many cases, users who click through and pay up receive nothing in return.
Facebook/Instagram have responded appropriately by banning these scammers and screening for ads that might be related to COVID-19. Unfortunately, other media outlets aren’t doing the same thing, and fake ads are spreading across major news sites that receive millions of visitors every month.
In this article, I’ll share some of the things I found, beginning with Disney’s ESPN.com. But first, let’s talk methods.
In a previous article, I explained how I scan for malicious ads using a combination of the Pyshark packet analyzer and other open-source tools. This time, since I was looking for malicious destinations rather than malicious code running in the user’s browser, I had to alter my strategy slightly.
Using HTTrack and Wget, I wrote a crawler that compiles a list of malicious URLs from trust-rating authorities like Scamdoc. That list is then matched against the tags, links and keywords scraped from the web sessions captured by Pyshark. In the end, I am alerted to the most suspicious incidents and attempt to replicate them manually; only when I can reproduce one do I know I have something legitimate.
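The flagging step described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the author's actual pipeline: the blocklist format, helper names, and the parent-domain matching heuristic are all assumptions.

```python
# Sketch of the flagging step: match HTTP requests from a packet
# capture against a blocklist of scam domains. The blocklist file
# format and function names here are illustrative assumptions.
import json
import urllib.parse


def load_blocklist(path):
    """Load suspicious domains compiled by the crawler
    (e.g. scraped from trust-rating sites like Scamdoc)."""
    with open(path) as f:
        return set(json.load(f))


def is_suspicious(url, blocklist):
    """Flag a URL whose host, or any parent domain of its host,
    appears in the blocklist (ads often load from subdomains)."""
    host = urllib.parse.urlparse(url).hostname or ""
    parts = host.split(".")
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))


def scan_capture(pcap_path, blocklist):
    """Walk HTTP requests in a saved capture and yield flagged URLs."""
    import pyshark  # wrapper around tshark; requires Wireshark installed
    capture = pyshark.FileCapture(pcap_path, display_filter="http.request")
    for pkt in capture:
        url = f"http://{pkt.http.host}{pkt.http.request_uri}"
        if is_suspicious(url, blocklist):
            yield url
    capture.close()
```

Keeping the matching logic in a pure function like `is_suspicious` makes it easy to test without a live capture, and the capture loop can be swapped for `pyshark.LiveCapture` when scanning sessions in real time.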
ESPN.com, a major online destination for sports fans, is owned by Disney, and both companies use the same ad tech running on top of Google’s Ad Manager, which has been found to serve fraudulent ads in the past.
When I scanned the site, it wasn’t too long before I found something undoubtedly malicious. The following ad purports to offer medical gloves and hand sanitizer:
Clicking on the link will lead users to a domain advertising a number of sanitation commodities at a marked-up price:
The fraudulence of this domain is already obvious given that hand sanitizer was sold out even on Amazon.com at the time, but further verification can be found by searching for the domain (homeinshop.com) and checking user reviews. According to WHOIS, the site was registered one month ago, suggesting it was probably created for the sole purpose of exploiting COVID-19-related fears.
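A recent WHOIS creation date is a common trait of scam storefronts, and the check is easy to automate. The sketch below uses the third-party python-whois package; the 60-day threshold is my own assumption, not a standard.

```python
# Hedged sketch: flag domains registered very recently, a common
# trait of throwaway scam storefronts. The 60-day threshold is an
# arbitrary assumption chosen for illustration.
from datetime import datetime, timedelta


def recently_registered(creation_date, now=None, max_age_days=60):
    """True if the WHOIS creation date falls within max_age_days."""
    if isinstance(creation_date, list):
        # python-whois can return multiple dates; take the earliest.
        creation_date = min(creation_date)
    now = now or datetime.utcnow()
    return now - creation_date < timedelta(days=max_age_days)


def check_domain(domain):
    """Fetch the WHOIS record and apply the freshness heuristic."""
    import whois  # pip install python-whois
    record = whois.whois(domain)
    return recently_registered(record.creation_date)
```

Separating the date comparison from the network lookup keeps the heuristic testable offline; in practice you would combine it with the user-review and blocklist signals described above rather than rely on registration age alone.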
According to the URL chain, the origin of this ad traces — as expected — to Google’s Ad Manager (below), through ESPN’s content delivery network (above). We’ll talk about frequency later.
As the owner of ESPN.com and the second largest media company in the world, Disney should be held accountable for allowing this ad to run. Over the past few months, publishers and ad networks have worked to proactively block potentially fraudulent domains and keywords like “coronavirus”. At Disney, it seems that very little stands between malicious actors and their targets.
Thanks to its sheer size, Disney was the most prominent media outlet where I could find fraudulent ads spreading in the wild. However, just by going down the list of popular news and entertainment websites, I was able to find many more. In this case, I want to highlight ad networks instead of specific domains.
In the past, I wrote about AT&T’s ad network Xandr (formerly AppNexus) and the unusually high volume of malware-carrying ads that seem to appear on its partner sites. I fully expected to see more activity from it, and I wasn’t disappointed. I was, however, a little surprised to see it deliver the exact same ad from before multiple times.
Here’s one example from Vox.com (halfway down):
And here’s another one from Today.com (near the top):
In the end, my scanner flagged about 0.28% of the ads I loaded over roughly 15,000 sessions, and that’s not counting the ones it missed. When I went to manually replicate my results, I noticed a lot of suspicious content that had fallen through the cracks. Who knows how big this problem really is; I’d say it’s already way too big.
When I talk about my hobby with friends, the reaction I’m often met with is indifference. “Surely anyone would know not to click that,” they say. And that might be true for most of the people in my generation — but what about the elderly, and what about younger kids with access to their parents’ credit card?
The truth is, this tactic is clearly profitable: whoever is behind the site covered in this article has spent a lot of money to place ads across multiple domains, and the campaign has been running for a month straight. We can assume that in this time of crisis, desperate people are clicking these ads, losing their money, and possibly engaging in unsafe practices when they end up with fake products or test kits.
At least some people are paying attention. For instance, the U.K.’s Advertising Standards Authority (ASA) has set up a website where users can report fraudulent advertisements, but so far only 99 people have used it, and this problem is impacting far more than 99 people. Why can’t we expect large publishers like Disney and networks like Xandr to deal with this problem themselves?
In the middle of a public health crisis, the real impact of fake and malicious advertising becomes painfully obvious. It’s not just an annoyance or a nuisance: it endangers public health and individual lives. The people who depend on the Internet are not all digitally literate, and they shouldn’t have to be. Large brands that profit from their visitors should at least lift a finger to protect their safety, and if they don’t, I’m going to call them out.