Report Scraper Sites to Google. How to Outrank Scraped Contents?

Blog plagiarism! The more popular a blog is, the more its content gets copied by scraper sites through RSS feeds. The reason for scraping is simple: the scraper-blog owner wants his slice of online ad revenue without doing any of the work!

Despite years of improvements to its search algorithms, Google’s webspam team still finds it tough to separate the wheat from the chaff. No matter how hard you researched, worked, and shared your original thoughts on your blog, a scraper site can outrank your content in Google search if its domain is older than yours and has more links from the Internet pointing to it than yours does! Right from this blog’s inception, at the very instant I press the Publish button, the content is scraped and splashed all over the Internet by various scraper sites, and I am fighting it out. As a blogger, can you do something on your part to outrank the scraper sites? Here is how you can fight back.

First of all, how do you trace the scraper sites?

There are four easy ways to get signals that your content is being scraped by another site:

  1. AdSense Allowed Sites:

    Google AdSense’s Allowed Sites feature lets you specify the sites on which your ad code may appear. If you have chosen to allow ads only on your own sites for your publisher code, any other site displaying your ad code will be reported here and brought to your attention as soon as you log in to the AdSense dashboard.

  2. FeedBurner Uncommon Uses:

    FeedBurner flags uncommon uses of your feed under the Analyze tab. Log in to FeedBurner occasionally to check your subscriber stats and see whether any suspicious domain is using your feed.

  3. Google Webmaster Tools, Links to Your Site:

    Google Webmaster Tools reports the sites linking to your posts. Once in a while, scroll through that data for any suspicious site linking to you. Look for sites with a lot of links pointing to yours: unless the scraper-site owner is technically savvy, the internal links you place in your posts will survive in the scraped copies and ultimately link back to your site.

  4. Google Web Search for a Particular Phrase:

    I intentionally insert a few unique words into the RSS feed, similar to the text below.

    Comment is open. Feel free to add your thoughts and participate in the discussion at Raj & Co. Furthermore, embedded videos in the posts may not come up in RSS feeds and email updates. In such a case, read the full article, Report Scraper Sites To Google. How To Outrank Scraped Contents?, in a browser.

    Now a thorough Google search for the sentence “Furthermore, embedded videos in the posts may not come up in RSS feeds and email updates” should easily expose the scraper sites.

    It is okay if a scraper site copies a couple of sentences and links back to the original article; you get a free link to your article. However, if a scraper site is copying the full content of your posts, you have reason to worry.
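The phrase search above can be semi-automated. Here is a minimal Python sketch that builds the exact-phrase Google query URL, excluding your own domain so that only the copies show up in the results; the function name and the example domain are illustrative, not part of any plugin or API:

```python
from urllib.parse import quote_plus

def scraper_search_url(phrase: str, own_domain: str) -> str:
    """Build a Google search URL for an exact phrase, excluding your own site."""
    # Quoting the phrase forces an exact match; -site: filters out your blog.
    query = f'"{phrase}" -site:{own_domain}'
    return "https://www.google.com/search?q=" + quote_plus(query)

url = scraper_search_url(
    "Furthermore, embedded videos in the posts may not come up "
    "in RSS feeds and email updates",
    "rajn.co",
)
print(url)
```

Open the printed URL in a browser; any result that remains is a candidate scraper site carrying your unique wording.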

What could you do to remove the scraped contents?

By this time, you must have identified a whole bunch of scraper blogs siphoning off your hard work, thoughts, and research — all the original content of your blog — that easily. RSS is a boon for growing a blog’s following, but it is detrimental to your interests when it makes siphoning off your sweat so easy. As long as search engines fail to distinguish splogs from blogs, you have no choice but to take up the sword yourself in defending your blog against copying and content theft. If a search-engine bot finds your article on a splog before finding it on your blog, it may well stamp the scraped copy as the original. Therefore, you need to stop content theft — but how do you remove scraped content from a blog owned by a stranger? Here are a few effective methods you can try.

Contact the site owner

Visit the scraper blog. Check whether the site offers any contact option. If none is available, look up the owner with a WHOIS search on the web to find the owner’s email address. Write to him, via email or the contact form, expressing your dissatisfaction and the actions you are going to take. I would send a cease-and-desist mail worded as follows:

Hi Webmaster,

You are copying posts from my blog, https://rajn.co, via its RSS feeds without authorization. Remove all copied posts immediately and stop copying henceforth.

Failing to do so within a week will force me to file a DMCA complaint against you, claiming compensation and damages. Subsequently, complaints to your host and Google AdSense will also be made. I hope you know the consequences thereafter!

As I mentioned earlier, the prime intention of a scraper is free content and easy money from slapping ads all over. Serving this notice should bring the scraper site to a halt, as no scraper wants to get banned by Google AdSense!
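The WHOIS lookup mentioned above can also be done from code. Below is a minimal sketch of a WHOIS client following RFC 3912 (send the domain terminated by CRLF to port 43 and read the reply). The default server and the helper names are illustrative; real lookups often need a follow-up query to the registrar’s own WHOIS server named in the first reply:

```python
import socket

def build_whois_request(domain: str) -> bytes:
    """A WHOIS request is just the domain name followed by CRLF (RFC 3912)."""
    return domain.encode("ascii") + b"\r\n"

def whois_lookup(domain: str, server: str = "whois.iana.org") -> str:
    """Query a WHOIS server on port 43 and return the full text reply."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(build_whois_request(domain))
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    return reply.decode(errors="replace")
```

The reply is free-form text; look for the registrant or abuse-contact email address to address your cease-and-desist mail.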

Contact the host

Send a copy of the aforesaid mail, or a more formal DMCA notice, to the scraper blog’s host. No host wants to invite trouble. Even if the site owner ignores your notice, a dutiful host should remove the scraped posts and prevent further scraping.

Report the scraper site to Blogspot

One of the advantages of a blog on Blogspot (Blogger) is that the author can hide behind the policies of Google. Maybe that is helpful in countries where democratic voices are suppressed; however, scrapers turn it to their advantage. So how do you remove scraped posts from a blog hosted on Blogspot? Simple: report the copyright infringement to Blogger. I have done that, and the response was marvelous — thanks to the Blogspot support team in this regard. If you do it right, the action will be swift, guaranteed.

If you find that a splog on Blogspot is copying not only your content but that of a bunch of other blogs too, flag it as spam to Blogspot. However, I have not seen stern action in this regard, although Blogger’s spam policy says it does not permit scraping content from other sites on the web.

Report the scraper site to AdSense

In a similar manner, report copyright infringement to AdSense. So far I have not had a chance to try this method, as the former three got the scraped posts removed, but I assume it should draw the same response I got from the Blogspot support team.

File a DMCA complaint to Google web search

Just as you can file a DMCA complaint with AdSense and Blogger to report a scraper site, you can file a DMCA complaint with Google web search itself. I am yet to hear from anybody about the response to this action.

Report the scraper site to Google

In its earlier initiatives against webspam, Google offered an option to report spam in its index. More recently, in its relentless effort to make search better, it has added an option to report scraper pages to Google. Why not try it out? If there are a lot of pages to report, use the tool to extract scraped links to make the job fast and easy.

If everything else fails, what else can you do?

Once you have taken down all the scraped content as described above, there shouldn’t be any competition from scraper sites. However, how do you crack any hard nuts still remaining? Turn them to your advantage by gaining links from such scraper sites to yours. Install the RSS Footer or WordPress SEO plugin and insert into your feed a link to the original article and your site, with some unique wording as mentioned above. For every scraped post, you get backlinks from the scraping site. Similarly, install the Yet Another Related Posts plugin to insert links to related posts into your feed. By doing so as a last resort, Googlebot should at least be able to identify the original article.
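What those plugins do can be sketched in a few lines. Assuming a hypothetical `add_feed_footer` helper applied to each post’s HTML before it is published in the feed (the function and its parameters are illustrative, not the plugins’ actual API):

```python
def add_feed_footer(post_html: str, title: str, url: str, site: str) -> str:
    """Append a source-attribution footer with unique wording to a feed item.

    A scraper that republishes the feed verbatim then carries a backlink
    to the original article and the home page on every copied post.
    """
    footer = (
        f'<p>Read the full article, <a href="{url}">{title}</a>, '
        f'in a browser at <a href="{site}">{site}</a>.</p>'
    )
    return post_html + footer

item = add_feed_footer(
    "<p>Post body goes here.</p>",
    "Report Scraper Sites to Google",
    "https://rajn.co/report-scraper-sites",
    "https://rajn.co",
)
print(item)
```

The unique wording in the footer doubles as the search phrase described earlier, so the same sentence both earns you backlinks and makes scraped copies easy to find.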

4 thoughts on “Report Scraper Sites to Google. How to Outrank Scraped Contents?”

  1. Strange, funny, and sad – you have a new scraper daring enough to start scraping right from this same article. Find your new scraper at this URL: blogsdirectoryhub.com/pages/MT-Herald-Dot-Com

    Oh Google, drop the scraper sites completely from your search index or else it is a menace for innocent bloggers and webmasters.

    • OMG. Crazy guy. Pity me. Let me begin my next fight. Feedmap.net is another scraper who is not buckling. Have to file DMCA against it.

    • Yeah, that is a good suggestion which cannot be denied though I am opposed to it as I don’t want my subscribers to get affected but I am forced to by the damn scrapers. I am doing it immediately.
