One of the largest digital payments platforms in the world used decoy user data and credentials to deceive fraudsters and trick them into exposing themselves. Allure Security’s unique decoy data capability entered realistic, but fake, data into form fields on phishing sites impersonating the platform’s brand. In this post, we’ll explain what decoy data is, how the platform used that data to prevent fraud, and how brands large and small might do the same.
Defenders can use decoy data to deceive scammers. MITRE defines a decoy user credential as “A credential created for the purpose of deceiving an adversary.” Decoy data, in the context of online brand impersonation attacks, consists of fake but believable identity information that a scammer attempts to steal from visitors to their phishing sites. In this particular case, the decoy data generated included, among other things, account credentials, phone numbers, and payment card details.
Decoy data must be convincing, not randomly generated gobbledygook. If scammers can distinguish it from real data, it’s not deceptive and the benefits are lost.
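To make the distinction concrete, here is a minimal sketch of the idea: assembling decoy records from realistic component parts rather than random character noise. The name lists, email patterns, and password shapes below are illustrative assumptions, not Allure Security’s actual generation method.

```python
import random

# Illustrative component lists; a real system would draw from much larger,
# demographically plausible pools.
FIRST_NAMES = ["Maria", "James", "Aisha", "Daniel", "Sofia"]
LAST_NAMES = ["Garcia", "Nguyen", "Okafor", "Smith", "Kowalski"]
MAIL_DOMAINS = ["gmail.com", "yahoo.com", "outlook.com"]

def decoy_identity(rng: random.Random) -> dict:
    first = rng.choice(FIRST_NAMES)
    last = rng.choice(LAST_NAMES)
    # Email patterns people actually use: first.last plus a couple of digits.
    email = f"{first.lower()}.{last.lower()}{rng.randint(1, 99)}@{rng.choice(MAIL_DOMAINS)}"
    # A plausible human password: dictionary word + digits + symbol,
    # not a random string that screams "generated".
    password = f"{rng.choice(['Summer', 'Dragon', 'Brooklyn'])}{rng.randint(10, 99)}!"
    return {"name": f"{first} {last}", "email": email, "password": password}

rng = random.Random(42)
print(decoy_identity(rng))
```

Compare the output to a string like `xK9#qZv2` filled into a name field: the latter fails the believability test instantly, which is exactly the point of building decoys from realistic parts.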
Cybercriminals run businesses. They want to optimize their return on investment. With that in mind, part of winning the cybersecurity battle is making it more expensive for adversaries to succeed. Therein lies the value of deceptive decoy data.
Allure Security uses decoy data to drive up the cost of a scam. Filling a scammer’s bucket with data they can’t use slows them down and makes them do more work – that is, it increases their costs. The best response to an online brand impersonation attack is multi-pronged: decoy data is one prong, with block-listing and takedown completing the triumvirate.
When a fraudster’s campaign collects more data than usual (because their repository has been tainted with decoy data), they need to decide how to proceed.
The attacker will ask themselves questions such as whether the stolen data is genuine, and whether validating each credential is worth the time and resources.
Regardless of what the adversary decides, their costs have gone up. The attacker may decide to shut down the scam considering it a loss and then move on to another target.
Or, if the fraudster decides to go through with testing the validity of the credentials, as they did in the digital payments platform’s case, they will expose themselves when they make use of the decoy data.
The digital payments platform we worked with on this project is one of the largest in the world. Therefore, fraudsters frequently impersonate the brand online to steal people’s account credentials and relevant personally identifiable information.
In this case, we’d found a group of scam websites that impersonated the payment platform’s brand and offered its service as a payment option to visitors. The adversaries wanted to steal PII, account credentials, and in some cases payment card information from victims.
The payments brand elected to give the scammers what they wanted by automating the generation and entry of decoy data into the phishing sites’ form fields. The information entered also included credentials for user accounts. The accounts in this case were real, though completely locked down. The payment platform was interested in tracking the attackers’ activities once they’d stolen what they believed were a victim’s credentials.
The decoy data also included working phone numbers (facilitated via “burner” phones) and information for real credit cards that we set up to limit transactions to $0.00.
Not only did the scammers need to be convinced of the authenticity of the decoy data, but the data also needed to be entered in the same context a genuine human would have entered it.
Fraudsters detect bot behavior by observing the pace at which data is entered, the time of day it’s entered, the geolocation of the system entering the data, and more. For example, decoy data such as a Miami address entered into a scam website needs to originate from an IP address associated with Miami, not Boston.
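One of those behavioral signals, typing pace, can be sketched in a few lines. This is a hypothetical illustration of the general technique, assuming per-keystroke delays drawn from a distribution; the parameter values are assumptions, not measurements from this case.

```python
import random

# Human keystroke cadence is irregular, so an automated submitter that
# wants to look human draws a per-character delay from a distribution
# instead of posting the whole form in milliseconds.
def keystroke_delays(text: str, rng: random.Random,
                     mean_s: float = 0.18, jitter_s: float = 0.06) -> list:
    # Clamp below so a pathological draw never yields a zero or negative delay.
    return [max(0.04, rng.gauss(mean_s, jitter_s)) for _ in text]

rng = random.Random(7)
delays = keystroke_delays("maria.garcia42@gmail.com", rng)
total_seconds = sum(delays)
# A ~24-character field should take a few seconds to "type",
# not arrive instantaneously.
```

The same principle extends to the other signals mentioned above: submission timestamps sampled from plausible local waking hours, and egress IP addresses chosen to match the geography of the decoy identity.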
Attackers will also evaluate the traffic visiting their site to inspect what’s called the user agent string associated with the traffic. The user agent string provides information about the device and web browser used to view the site. If a visitor’s information doesn’t align with what’s expected from common consumer devices, the attacker will filter that traffic and/or know not to trust it.
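The kind of user-agent check described above can be sketched as a simple token test. This is an assumed, simplified version of such filtering (attackers and defenders alike use far richer fingerprinting); the token lists are illustrative.

```python
# Headless automation tools often leak telltale tokens in the user agent
# string, while real consumer traffic carries familiar browser tokens.
SUSPICIOUS_TOKENS = ("HeadlessChrome", "PhantomJS", "python-requests", "curl")
COMMON_BROWSER_TOKENS = ("Chrome/", "Firefox/", "Safari/")

def looks_like_consumer_browser(user_agent: str) -> bool:
    if any(tok.lower() in user_agent.lower() for tok in SUSPICIOUS_TOKENS):
        return False
    return any(tok in user_agent for tok in COMMON_BROWSER_TOKENS)

ua_real = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
           "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
ua_bot = "Mozilla/5.0 (X11; Linux x86_64) HeadlessChrome/120.0.0.0"

print(looks_like_consumer_browser(ua_real))  # True
print(looks_like_consumer_browser(ua_bot))   # False
```

A decoy-data system therefore has to present user agent strings consistent with common consumer devices, or the attacker will discard its submissions before ever testing them.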
By monitoring the decoy payment platform accounts, phone numbers, and credit card accounts, we could observe more of the attackers’ tactics, techniques and procedures (TTPs).
For example, the attackers automated their validation procedure. Within minutes of our entering realistic bogus data into their website, they tested whether the payment platform accounts were valid. This alerted the payments platform that the adversary was interacting with its web application, so it could prevent any fraudulent activity from occurring and gather information about the attacker and their methods.
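The defender’s side of this exchange can be sketched as a tripwire: any login attempt against a decoy account can only come from the attacker, so it triggers an alert carrying the request’s fingerprint. The function name, fields, and sample values below are hypothetical illustrations, not the platform’s actual detection logic.

```python
# Decoy account identifiers registered with the monitoring system.
DECOY_USERNAMES = {"maria.garcia42@gmail.com", "j.smith88@outlook.com"}

def check_login_attempt(username: str, source_ip: str, user_agent: str):
    """Return an alert record if a decoy credential is used, else None."""
    if username in DECOY_USERNAMES:
        # No legitimate user holds this credential, so this is the attacker.
        return {
            "alert": "decoy-credential-used",
            "username": username,
            "source_ip": source_ip,
            "user_agent": user_agent,
        }
    return None  # normal traffic: no alert

alert = check_login_attempt(
    "maria.garcia42@gmail.com", "203.0.113.7", "curl/8.4.0"
)
```

Each captured fingerprint (source IP, user agent, timing) becomes intelligence about the attacker’s infrastructure and tooling, which is exactly what the payments platform was after.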
In addition, just as retailers do when processing a credit card over the phone, the attackers also executed a zero-dollar transaction against the credit cards to verify their authenticity. These zero-dollar transactions do not appear on a cardholder’s statement and go unnoticed by the user.
In this instance, it appears the attacker tested the cards through a small local business in New York City. We hypothesize that the business was a mom-and-pop shop whose credit card payment device the attacker had compromised.
Within minutes, the attacker then attempted to buy electronics from a German retailer, but the transaction failed because it exceeded the payment threshold set for the decoy credit card. They were stopped in their tracks.
To counter online brand impersonation attacks and the scammers behind them, defenders need automation to act quickly. Automating an offensive strategy to deceive the deceiver requires the generation of decoy data that is believable and passes a number of automated tests performed by attackers.
Our recent collaboration with a top digital payments platform provides more proof that this is not only feasible but genuinely useful. When an attacker interacts with decoy data, they unwittingly leave behind “fingerprints” that reveal information about them and their behavior.