All online publishers, regardless of their size and niche, will be visited by some form of invalid traffic over time. While not all bots are bad, invalid traffic sources can damage a website’s reputation and lead to account suspension by Google, Amazon, and other partners.
Invalid traffic is a threat to all publishers looking to generate ad revenue by selling space on their websites.
This article delves into the different types of invalid ad traffic, what causes it, and how publishers can prevent it in order to protect advertisers’ return on spend and shield themselves from fraud.
What is Invalid Traffic?
Invalid traffic refers to clicks and impressions on a website that do not come from genuine users with a real interest in the content, artificially inflating the site’s metrics.
Invalid traffic occurs both accidentally and with fraudulent intent. According to Google, it can ‘include accidental clicks caused by intrusive ad implementations, fraudulent clicking by competing advertisers, advertising botnets and more.’
From an advertiser’s perspective, the clicks don’t lead to genuine revenue, making them essentially worthless. With research showing a loss of $1.27B to publishers a year due to ad fraud and invalid traffic, publishers need to know how to identify and mitigate invalid traffic.
Typically, invalid traffic includes:
- Impressions and clicks generated by publishers themselves on their own websites and ads
- The adoption of automated tools by publishers to increase impressions
- Bot traffic used to spam websites in order to steal user data
The Media Rating Council (MRC), together with the IAB, classifies two types of invalid traffic: general invalid traffic (GIVT) and sophisticated invalid traffic (SIVT).
General Invalid Traffic (GIVT)
General invalid traffic runs in the background and scans websites for information. It does not mimic human behavior and is not sent with fraudulent intent, making it the most acceptable and least risky form of invalid traffic.
Some types of general invalid traffic include:
- Bots, spiders, and other crawlers that originate from known data centers and search engines
- Data and brand safety bots
- Traffic from unknown browsers
- Analytics crawlers
- Invalid ad placements
Although GIVT is non-human traffic, it generally serves a purpose in the digital ecosystem, whether to measure or improve based on the data it extracts.
Perhaps most important to note is that GIVT doesn’t artificially inflate ad clicks or impressions.
Sophisticated Invalid Traffic (SIVT)
Sophisticated Invalid Traffic (SIVT) is traffic created with malicious intent. Of the two types of invalid traffic, sophisticated invalid traffic is far more technically involved and consequently harder to identify. This type of invalid traffic is difficult to detect because malicious actors use botnets to mimic human behavior. SIVT generally requires advanced analytics and concerted human intervention to detect the fraudulent ad traffic and ultimately prevent it from artificially inflating clicks.
Sophisticated invalid traffic also includes traffic that does not meet the criteria for ad quality, ad serving, and ad completeness.
SIVT includes:
- Bot traffic that interacts with web pages and digital ads without declaring itself as non-human
- Illegal substitute traffic
- Malware
- Cookie Stuffing
- Bots designed to manipulate data and statistics
- Hacked user devices
- False location data
3 Major Causes of Unintentional Invalid Traffic
Publishers who have been notified by Google that their website is being hit by invalid traffic fall into one of two categories:
- The source of the invalid traffic is a mystery
- The publisher is knowingly doing something that generates the impressions
Publishers in the first category are usually entirely in the dark as to the source of this traffic. Here are three major causes of unintentional invalid traffic for publishers.
Expired and Redirected Domains
A common SEO practice is buying an expired domain. Publishers then either redirect the expired domain to their existing website or build a new site on it to fit the domain’s industry niche.
The benefits of this practice include:
- Taking over a domain that still has good authority
- The site may still have good backlinks
- The domain name is in a similar industry to the publisher’s
- The domain has an .edu or .org address, which can lend the publisher a higher level of perceived authority
When publishers purchase an expired domain, the idea is that, once the redirect is set up, they will gain the benefits of existing backlinks, domain authority and rankings.
The problem, however, is that while publishers may receive transferred impressions on the new site, they don’t know where this traffic is coming from or why.
When forwarding traffic from expired domains to new sites, publishers may be sending themselves:
- Unwanted bots and crawlers
- Visitors looking for the old website, leading to a high bounce rate
- Warnings from Google about invalid traffic
Solving this issue can be pretty complex, perhaps overshadowing the potential benefits of redirecting the site in the first place.
To start with, site owners should investigate which expired domains are forwarding traffic to the new site. Publishers can do this by examining the server log files.
Moreover, if Google Analytics data is available for the redirecting URLs, publishers should look at which redirects are being hit and where that traffic is coming from. They may also be able to set up domain forwarding that filters out bots through the CDN or host.
This will allow them to keep the backlinks in place while protecting against bots.
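To make the log review concrete, here is a minimal sketch that tallies referrers and user agents from a standard combined-format access log. The log path, regex, and choice of fields are assumptions and will vary by host and server setup.

```python
import re
from collections import Counter

# Matches the common Apache/Nginx "combined" log format:
# ip ident user [time] "request" status bytes "referrer" "user agent"
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

def summarize_access_log(path="/var/log/nginx/access.log", top=10):
    """Return the most common referrers and user agents in the log."""
    referrers, user_agents = Counter(), Counter()
    with open(path) as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match:
                referrers[match.group("referrer")] += 1
                user_agents[match.group("user_agent")] += 1
    return referrers.most_common(top), user_agents.most_common(top)
```

A redirected expired domain showing up heavily in the referrer counts, paired with obviously automated user agents, is a strong hint about where unwanted traffic originates.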
Getting Hacked
Bots make up an increasing share of all web traffic. Certain bots are helpful, such as search engine bots (Googlebot), monitoring bots, and SEO crawlers. But bad bots are also on the rise, from scrapers that steal content to click bots used to generate impressions and click on display ads. In fact, a study by Imperva showed that in 2019, bad bots accounted for a whopping 24.1% of all web traffic.
The most significant rise in bad bots has been among those used for credential cracking and credential stuffing.
High-volume DDoS attacks also put significant strain on networks and, in some cases, take them down completely.
One of the most straightforward measures site owners can take to avoid having their CMS hacked is to change the default login URL.
For example:
www.mysite.com/admin becomes
www.mysite.com/portallogin12
Leaving your site’s admin login as default almost guarantees the site will be hit by malicious traffic.
Once publishers change the default URL, any bot attempting to hack the CMS will be directed to a 404 instead.
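How this is done depends on the CMS or framework, but as a minimal illustration the sketch below assumes a small Flask app: the real login form is served from a non-default path, while the well-known default paths return a 404. The route names are hypothetical examples.

```python
# Minimal sketch, assuming a Flask app; the paths are hypothetical examples.
from flask import Flask, abort

app = Flask(__name__)

@app.route("/admin")
@app.route("/wp-login.php")
def default_login_paths():
    # Bots probing the well-known login URLs get a 404 instead of a login form.
    abort(404)

@app.route("/portallogin12")
def real_login():
    # Serve the actual login form from a non-default, hard-to-guess path.
    return "Login page"

if __name__ == "__main__":
    app.run()
```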
Purchasing Traffic
While purchasing traffic is still popular with website owners looking to increase clicks, it is ultimately a dead-end road.
Purchased traffic usually comes from click farms, where specially designed software or paid human clickers inflate clicks and impressions. And while this may seem appealing in the short term, it is never good for a brand or its revenue.
Instead, publishers should look to increase human traffic through search engine optimization and providing superior content and an exemplary user experience.
This will ensure advertisers get value for their money when it comes to leads and conversions and will be happy to pay the premium that comes with such a site’s reputation.
How to Find and Eliminate Invalid Traffic (IVT) on Your Website
While invalid traffic can be highly detrimental to both advertisers and publishers alike, Google is unfortunately quite vague when it comes to how to deal with these traffic sources.
Having said that, Google remains extremely strict with its program policies. Google AdSense states that if it observes high levels of invalid traffic on an account, it may suspend or disable the account to protect advertisers and users. Additionally, if Google cannot verify the quality of a publisher’s traffic, it may limit or disable the publisher’s ad serving.
Not only does invalid traffic increase the risk of having an AdSense account suspended, but it also increases costs for advertisers. Furthermore, large amounts of non-converting traffic can devalue a publisher’s inventory.
To begin identifying and eliminating invalid traffic, publishers should start by filtering out bots. This can be done using Google Analytics. Here’s how (an API-based alternative is sketched after the steps):
- Log in to your Google Analytics account
- Click on the Admin button
- Navigate to the View tab and click on View Settings
- Scroll down the page and select the Bot Filtering option if it’s unchecked.
- Finally, make sure to save your settings
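For publishers managing many views, the same setting can be toggled programmatically. The sketch below assumes a Universal Analytics view and the legacy Google Analytics Management API v3, with an authorized credentials object already in hand; the account, property, and view IDs are placeholders.

```python
# Sketch only: assumes google-api-python-client and authorized credentials for a
# Universal Analytics (GA3) property. IDs passed in are placeholders.
from googleapiclient.discovery import build

def enable_bot_filtering(credentials, account_id, property_id, view_id):
    analytics = build("analytics", "v3", credentials=credentials)
    # The Management API's profiles (views) resource exposes botFilteringEnabled,
    # which mirrors the "Bot Filtering" checkbox in View Settings.
    return analytics.management().profiles().patch(
        accountId=account_id,
        webPropertyId=property_id,
        profileId=view_id,
        body={"botFilteringEnabled": True},
    ).execute()
```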
It is important to note that while Google Analytics is quite efficient in filtering different types of invalid traffic, it is not infallible and does not guarantee 100% removal of invalid traffic.
Therefore, it is essential for publishers to also follow best practices:
- Never click on your own ads
- Follow guidelines and policies for ad serving and placement
- Have traffic validated by a third party
With advertisers becoming increasingly concerned about ad fraud, publishers should ensure that they are on top of the threat posed by invalid traffic.
For maximum ad revenue for everyone involved, publishers should constantly be monitoring their traffic sources and using the above methods to filter IVT.
Is your site struggling with invalid traffic, bots, and ad fraud?
If you’re making more than $2,000 in monthly ad revenue, contact us today to learn more about how Publift can help increase your ad revenue and best optimize the ad space available on your website or app.
FAQs
How can a website detect and block invalid traffic?
Google Analytics can help detect and filter invalid traffic.
The following signals in Google Analytics can point to invalid traffic (a simple anomaly check is sketched after the list):
- Decreased Session Duration
- Increased Number of Pageviews
- Increased Number of Pages per Session
- Visibly Increased or Decreased Bounce Rate
- Decreased Page Load Speed
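As an illustrative sketch, the function below flags days whose analytics metrics deviate sharply from a trailing seven-day baseline. The metric names, the input format, and the 50% change threshold are arbitrary assumptions for the example.

```python
# Flag days whose metrics swing far from the previous week's average.
from statistics import mean

def flag_anomalies(daily_metrics, threshold=0.5):
    """daily_metrics: list of dicts such as
    {"date": "2023-01-01", "avg_session_duration": 95.0,
     "pages_per_session": 2.1, "bounce_rate": 0.48}"""
    flagged = []
    keys = ("avg_session_duration", "pages_per_session", "bounce_rate")
    for i, day in enumerate(daily_metrics):
        if i < 7:
            continue  # need a 7-day baseline first
        baseline = {k: mean(d[k] for d in daily_metrics[i - 7:i]) for k in keys}
        for k in keys:
            if baseline[k] == 0:
                continue
            change = abs(day[k] - baseline[k]) / baseline[k]
            if change > threshold:
                flagged.append((day["date"], k, round(change, 2)))
    return flagged
```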
Google Analytics makes it easy to filter known bot traffic out of your reports. To do so, follow these steps:
- Go to your Google Analytics account and click on the "Admin" button.
- Go to the "View" tab and click on "View Settings".
- Find the Bot Filtering option and tick the checkbox if it’s unchecked.
- Save the settings.
This way, you can filter bot traffic using Google Analytics. Apart from GA, you can also use the following methods (a minimal IP and user-agent blocklist is sketched after this list):
- Manually block invalid IP addresses
- Use a bot management solution
- Use reCAPTCHA
- Use a web application firewall (WAF)
- Use WordPress plugins
- Use IAB bot lists
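As a minimal sketch of manual IP blocking combined with a user-agent denylist, the WSGI middleware below rejects requests from listed IPs or obviously automated clients. The IP addresses and user-agent substrings are hypothetical examples; in practice the lists would come from your own logs or an industry bot list.

```python
# Illustrative WSGI middleware: block requests from known-bad IPs or user agents.
BLOCKED_IPS = {"203.0.113.7", "198.51.100.23"}            # example addresses
BOT_UA_SUBSTRINGS = ("scrapy", "python-requests", "headlesschrome")

class IVTFilterMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        ip = environ.get("REMOTE_ADDR", "")
        user_agent = environ.get("HTTP_USER_AGENT", "").lower()
        if ip in BLOCKED_IPS or any(s in user_agent for s in BOT_UA_SUBSTRINGS):
            # Refuse the request before it reaches the site or its ad tags.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return self.app(environ, start_response)
```

Wrapping the application (for example, `app = IVTFilterMiddleware(app)`) applies the check to every request before it is served.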
How does AdSense detect invalid clicks?
Every click on an ad is examined by Google, which has sophisticated systems to identify valid clicks and impressions. If an invalid click is identified, Google removes it from reports and payments. There are several signals AdSense uses to detect invalid clicks (a simple publisher-side check is sketched after the list):
- Multiple ad clicks from a single visitor: One or two clicks from different individuals is normal, but when a single person generates 10+ ad clicks, Google takes notice and monitors the account.
- Many site visits from a single IP address: If one IP address is repeatedly used to click on a particular ad, Google’s algorithms easily detect the activity.
- Many visits from the same referrer: Google AdSense keeps track of organic visitors arriving via Google, Bing, Yahoo, and other sources. An unusually high number of visits from the same referrer can be treated as spam.
- Visits and clicks from your own IP address: If site traffic and clicks originate from the publisher’s own IP address, AdSense discounts them and marks them as invalid clicks.