The safest thing you can do is also the problem
There is a particular irony in the malvertising problem that I think gets underappreciated: the attack targets users at the exact moment they are doing something responsible.
They are not clicking a suspicious email. They are not opening an attachment from someone they do not recognize. They are typing the name of a product their company uses, Adobe Acrobat, say, or PuTTY, or whatever video conferencing tool their IT team approved last quarter, into a search engine. They are looking at the results. They are clicking the one at the top. That is what we trained them to do, more or less, and the attack lives in the gap between that training and what actually happens next.
What happens next, in a growing number of cases, is that the top result is a paid ad purchased by someone who is not Adobe, and the download page it leads to is a replica good enough to pass a quick glance from someone who has a meeting in ten minutes and just needs the installer. The file installs. It does what the user expected. It also drops an infostealer that harvests every saved password in their browser, every active session cookie, and every authentication token their machine has cached. That bundle gets packaged, uploaded, and listed for sale on a dark web market within hours. And the user has no idea anything happened.
What we are seeing
Infostealer compromises and stealer logs cross my desk with alarming regularity, but what stands out is not a particular malware family or a single novel capability. It is how mature the operating model has become. In some cases, the resale layer now looks less like a pile of raw logs and more like a searchable service. We see Telegram-connected ecosystems such as Daisy Cloud and Moon Cloud that ingest exfiltrated stealer data and let buyers query for specific datasets rather than manually sort through wholesale dumps. That has created downstream industries around the initial infection. Malware steals the data, Telegram and lightweight services move it, dark web and carding markets price it, brokers filter it, and other actors turn it into account takeover, business email compromise, credential stuffing, fraud, or the first step toward ransomware. The supply chain is faster, more modular, more resilient, and more commercially mature than it was even a year ago.
Malvertising campaigns are one of the most efficient entry points into that pipeline. The pattern that concerns me most is not the sophistication of any individual campaign. It is the economics. Malvertising has made initial access cheap and self-service. An attacker buys an ad, stands up a convincing page, and waits for people to do what people do, which is search for things. The conversion rates are not spectacular, but they do not need to be. At the scale of global search traffic, even a small fraction of clicks produces a meaningful volume of compromised credentials and session tokens that can be monetized downstream. Advertising platforms also make that traffic easier to target than many people realize. Fraudsters can often focus campaigns on specific device types, operating systems, version ranges, and network conditions so the weaponized ad is only shown to the users and environments most likely to produce a successful infection.
The campaigns we track as part of our dark web monitoring use rapid domain churn, geographic and device filtering to evade security scanners, and redirect chains that show different content depending on who is looking. A security researcher examining the ad sees a benign landing page. An employee in the target geography on a Windows machine sees the credential harvesting form. This is not new, but the tooling has matured to the point where it is operationally routine rather than technically impressive.
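One practical way to surface that kind of cloaking, assuming you can crawl the candidate landing page at all, is to fetch it under several crawl profiles and compare what each one gets back. This is a minimal sketch, not our production tooling; the profile labels and crawl results are illustrative:

```python
import hashlib

def content_fingerprint(html: str) -> str:
    """Hash the page body so materially different responses stand out."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def looks_cloaked(responses: dict[str, str]) -> bool:
    """Flag a URL whose content differs across crawl profiles.

    `responses` maps a profile label (e.g. 'datacenter-ip/linux',
    'residential-ip/windows') to the HTML that profile received.
    One distinct hash across all profiles suggests no cloaking;
    more than one is a signal worth a human look.
    """
    fingerprints = {content_fingerprint(html) for html in responses.values()}
    return len(fingerprints) > 1

# Hypothetical crawl results for one suspicious ad landing page.
crawl = {
    "datacenter-ip/linux": "<html><body>Benign landing page</body></html>",
    "residential-ip/windows": "<html><form><input type='password'></form></html>",
}
print(looks_cloaked(crawl))  # True: the two profiles saw different pages
```

In practice byte-exact hashing is too brittle, since timestamps and ad identifiers make every response unique, so you would normalize the content or use a fuzzy hash first. But the shape of the check, the same URL viewed from the perspectives the attacker is filtering on, stays the same.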
Why "don't click ads" is not a strategy
The instinct is to solve this with training. Teach people not to click sponsored results. Promote bookmarks for sensitive logins. Publish official download paths on the company intranet.
Those are fine hygiene measures. They are also aspirational in the way that telling people to floss twice a day is aspirational. Some percentage of employees will internalize the guidance. The rest will continue doing what they have always done, because searching for something and clicking the top result is a behavior that predates security awareness training by about twenty years and is reinforced by every other interaction they have with the internet.
The more productive question is what happens after the click. And the answer, in most organizations, is: not much, for a while. The infostealer runs. The logs get exfiltrated. The credentials and session tokens appear on a market. And then, days or weeks later, someone uses those credentials to log in as the employee, bypass MFA using a stolen active session token, and begin the actual attack, whether that is account takeover, business email compromise, or the first step of a ransomware deployment.
The gap between infection and exploitation is the window where defenders actually have leverage. Shortening session lifetimes, rotating refresh tokens, requiring re-authentication for sensitive actions, and monitoring for impossible-travel patterns all reduce the shelf life of stolen credentials. The goal is to make the thing the attacker purchased on the dark web expire before they can use it.
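The impossible-travel check, for instance, reduces to computing the speed implied by two consecutive logins. A minimal sketch, with the speed threshold and the login records as illustrative assumptions:

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900.0):
    """Flag two logins whose implied travel speed exceeds a commercial flight.

    Each login is (timestamp, lat, lon). `max_kmh` is a tunable threshold.
    """
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = (t2 - t1).total_seconds() / 3600
    if hours == 0:
        return True  # simultaneous logins from two different places
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# New York at 09:00, then Moscow forty minutes later: flagged.
a = (datetime(2024, 5, 1, 9, 0), 40.71, -74.01)
b = (datetime(2024, 5, 1, 9, 40), 55.75, 37.62)
print(impossible_travel(a, b))  # True
```

The point is not the geometry, which any identity platform already does for you, but that a stolen session token used from the buyer's infrastructure tends to fail exactly this kind of check, as long as someone is looking.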
The brand dimension
There is a second angle to this that matters to anyone responsible for protecting a brand, and it is the one I think about most in the context of our work.
The malvertising page impersonating your product is not just an endpoint security problem. It is a brand impersonation event. Your logo is on the page. Your product name is in the ad copy. Your customers or employees are the targets. The page exists because your brand carries the trust that makes the click happen. If the brand were not recognizable, the ad would not convert.
That means the monitoring question is not just “are our employees clicking bad things” but “is someone buying ads using our brand to build credential harvesting pages.” Those pages sit on lookalike domains with your name in them, promoted through the same advertising platforms your marketing team uses. Detecting them requires the same content-based analysis that detects fake storefronts and impersonation sites anywhere else (visual similarity, brand asset misuse, credential harvesting form detection) applied to the ad ecosystem.
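The form-detection piece of that analysis is the most mechanical part: a password field on a domain that resembles, but is not, your own. A toy sketch using only the standard library; the similarity cutoff and the example domains are illustrative, and real systems layer far more signals on top:

```python
from difflib import SequenceMatcher
from html.parser import HTMLParser

class CredentialFormFinder(HTMLParser):
    """Record whether a page contains a password input."""
    def __init__(self):
        super().__init__()
        self.has_password_field = False

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get("type") == "password":
            self.has_password_field = True

    def handle_startendtag(self, tag, attrs):
        self.handle_starttag(tag, attrs)  # treat <input .../> the same way

def domain_similarity(candidate: str, brand: str) -> float:
    """Crude lookalike score between two domains, 0..1."""
    return SequenceMatcher(None, candidate, brand).ratio()

def suspicious_harvesting_page(html: str, page_domain: str, brand_domain: str) -> bool:
    """Flag a password form hosted on a domain that imitates the brand."""
    finder = CredentialFormFinder()
    finder.feed(html)
    lookalike = (page_domain != brand_domain
                 and domain_similarity(page_domain, brand_domain) > 0.7)
    return finder.has_password_field and lookalike

# 'acrne.com' uses the classic rn-for-m typosquat of a hypothetical 'acme.com'.
page = "<html><form action='/login'><input type='password' name='pw'></form></html>"
print(suspicious_harvesting_page(page, "acrne.com", "acme.com"))  # True
```

A character-level ratio is a deliberately naive stand-in for real lookalike detection, which also considers homoglyphs, keyword stuffing, and registration metadata, but it shows why the check belongs in the ad pipeline rather than only at the mail gateway.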
Wrapping Up
Malvertising exploits a truth that no amount of training will fully address: people trust what appears at the top of a search result. The attack does not feel like an attack. It feels self-initiated, which is precisely why it works. For defenders, the useful response is not to fight the behavior but to reduce the value of what gets stolen: shorter sessions, faster detection of stealer-log exposure, and brand monitoring that catches the impersonation infrastructure before employees or customers encounter it. The search bar is an untrusted channel now. That is an uncomfortable adjustment, but it is the correct one.
What to read next
- How Search Ads Became a Phishing Channel. The full anatomy of malvertising, from keyword bidding to credential harvesting pages to forced redirects.
- Bing Malvertising: The Enterprise Blind Spot. Why Bing’s integration into Windows search and Copilot creates a unique exposure for enterprise environments, and the Rhysida ransomware campaigns that exploit it.
- The $500 Purchase That Starts Every Ransomware Attack. How stolen credentials from infostealers end up on dark web markets, and the 23-to-36-day window between listing and ransomware deployment.