Avoid WordPress Duplicate Content Problems With Google

The best way to ensure a web page ranks well in Google keyword searches is to make sure it is the only page on the web that includes its content. That way you avoid having several web pages compete as roughly equal candidates for the same keyword search, and you increase the chance that this unique page will outrank other, independent web pages that cover the same topic. That’s the theory, and it seems to work out well in practice.

WordPress is great software for producing blogs, but out of the box the WordPress content management system produces a series of pages that all contain the same content. Just see the concerns expressed in this WebmasterWorld thread about WordPress And Google: Avoiding Duplicate Content Issues, where several coding suggestions were offered to avoid the problems. More recently, David Bradley has suggested that something called the canonical link element can be the solution to Avoiding Duplicate Content Penalties.
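For reference, the canonical link element is a single line placed in the head section of any page that duplicates a post, telling search engines which URL is the preferred version to index. A minimal sketch (the URL here is purely illustrative):

```html
<!-- In the <head> of a duplicate page (archive, category, paged view),
     point search engines at the preferred URL for the content.
     example.com and the path are illustrative, not real. -->
<link rel="canonical" href="https://example.com/2009/06/avoid-duplicate-content/" />
```

Modern WordPress themes and SEO plugins can emit this automatically, but it is worth verifying in your page source.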

We should quickly add that this is not an inherent weakness of WordPress alone, since many other CMSs suffer from similar problems. It is a well-known problem, and you can find an excellent article on how to Avoid Duplicate Content on WordPress Websites, which gives the appropriate steps to take. The most important step of all is to have the right robots.txt file.

I wondered how well people were grappling with this duplicate content problem and decided to check some of Technorati’s Blogger Central top 100 blogs. In particular, I thought a check of their robots.txt files would give an indication of whether they had tried to solve the problem. Here is what I found for the robots.txt files of the eight most popular blogs.

  1. The Huffington Post

    # All robots will spider the domain
    User-agent: *
    # Disallow directory /backstage/
    User-agent: *
    Disallow: /backstage/

  2. TechCrunch

    User-agent: *
    Disallow: /*/feed/
    Disallow: /*/trackback/

  3. Engadget

    (empty)

  4. Boing Boing

    User-agent: *
    Disallow: /cgi-bin

  5. Mashable!

    User-agent: *
    Disallow: /feed
    Disallow: /*/feed/
    Disallow: /*/trackback/

    Disallow: /adcentric
    Disallow: /adinterax
    Disallow: /atlas
    Disallow: /doubleclick
    Disallow: /eyereturn
    Disallow: /eyewonder
    Disallow: /klipmart
    Disallow: /pointroll
    Disallow: /smartadserver
    Disallow: /unicast
    Disallow: /viewpoint

    Disallow: /LiveSearchSiteAuth.xml
    Disallow: /mashableadvertising2.xml
    Disallow: /rpc_relay.html

    Disallow: /browser.html
    Disallow: /canvas.html

    User-agent: Fasterfox
    Disallow: /

  6. Lifehacker

    User-Agent: Googlebot
    Disallow: /index.xml$
    Disallow: /excerpts.xml$
    Allow: /sitemap.xml$
    Disallow: /*view=rss$
    Disallow: /*?view=rss$
    Disallow: /*format=rss$
    Disallow: /*?format=rss$
    Disallow: /*?mailto=true

  7. Ars Technica

    User-agent: *
    Disallow: /kurt/
    Disallow: /errors/

  8. Stuff White People Like

    User-agent: IRLbot
    Crawl-delay: 3600

    User-agent: *
    Disallow: /next/

    # har har
    User-agent: *
    Disallow: /activate/

    User-agent: *
    Disallow: /signup/

    User-agent: *
As you may notice, the very top blogs seem to show a singular disregard for this issue, with minimal robots.txt files. Further down the list, however, some of these top blogs clearly do recognize the importance of limiting what the search engine robots crawl and index.

The impetus for exploring this issue came after noticing an additional complication that results if you put An Elegant Face On Your WordPress Blog by using Multiple WordPress Loops.

This could have resulted in many extra web pages that humans would likely not see but search engine spiders would certainly crawl. Changes were made in the site architecture to avoid this. To avoid other potential duplicate content problems, the current robots.txt file for this blog appears as follows:

User-agent: *
Disallow: /wp-login.php
Disallow: /wp-admin/
Disallow: /wp-register.php
Disallow: /wp-login.php?action=lostpassword
Disallow: /index.php?paged
Disallow: /?m
Disallow: /test/
Disallow: /feed/
Disallow: /?feed=comments-rss2
Disallow: /?feed=atom
Disallow: /?s=
Disallow: /index.php?s
Disallow: /wp-trackback
Disallow: /xmlrpc
Disallow: /?feed=rss2&p
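As a quick sanity check on rules like these, Python’s standard library robots.txt parser can report whether a given URL would be blocked. Note that this parser only understands plain Disallow prefixes, not the wildcard patterns some of the entries above use, so this sketch tests a simplified subset; the rules and URLs are illustrative:

```python
from urllib import robotparser

# A simplified subset of the rules above (robotparser does not
# understand wildcard patterns such as "/*?feed").
rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /feed/
Disallow: /test/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Admin and feed URLs are blocked; an ordinary post is not.
print(rp.can_fetch("*", "https://example.com/wp-admin/options.php"))  # False
print(rp.can_fetch("*", "https://example.com/feed/"))                 # False
print(rp.can_fetch("*", "https://example.com/my-post/"))              # True
```

Running a few of your own URLs through a check like this is a fast way to confirm the file actually blocks what you intended and nothing more.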


Getting the robots.txt file correct is one of the easiest ways of increasing the visibility of your blog pages in search engine keyword searches. Leaving two essentially similar web pages indexed means that they divide up the ‘relevance’ that a single web page would have, which can approach a 50% reduction in potential keyword ranking. Perhaps the top blogs can ignore such improvements, but most of us should not. Check what the spiders may crawl by evaluating your website with Xenu Link Sleuth. We should all examine our robots.txt files carefully and make sure they are doing an effective job. Is yours?


Andy Beard added a comment that he has concerns about using the robots.txt file as a solution to the WordPress Duplicate Content problem. He explained these in a post some time ago called SEO Linking Gotchas Even The Pros Make. There is much food for thought there and we will follow up in a subsequent post.



23 thoughts on “Avoid WordPress Duplicate Content Problems With Google”

  1. Great and timely post! I know about the duplicate content issues in WP and wanted to brush up on the details because I’ve just launched another blog. This is exactly what I was looking for.

    Here’s one other potential duplicate content issue that I didn’t notice mentioned here:

True, you want Google to see your individual blog post page as the only page with the post’s content. However, by default WordPress also publishes the full blog post on the home page of your blog. One common way to avoid this is to use the “more” tag under the first paragraph of your blog post, so that only the first paragraph appears on the home page.

    I don’t want any of the tag pages, archive pages, or category pages indexed. I do however want these pages followed because they eventually lead to the posts which I have at the bottom of the category silos.

Ugh, I HATE duplicate content. I had to get rid of my entire Blog API for Drupal because it kept duplicating my content. I wish I had read your post first. Great post!

  3. This is something I need to check on my site… I’ve heard a lot about the duplicate content issue with WordPress, but I haven’t ever followed up on it. Thanks for the informative post.

    ~ Kristi

  4. You raise an interesting point, Andy. I appreciate the information you provided in our e-mail exchange. As a result I re-examined the blog architecture and made some small changes to it. I slightly revised the post in consequence and added an Update comment. Thanks for your inputs.

  5. Pingback: Belated Thursday Roundup for the Week of 5/31/09 | GoBLogKE - Create your own blog - let's list a free blog here

  6. Pingback: Belated Thursday Roundup for the Week of 5/31/09 | Search Engine Optimization Professionals ( SEO )

Finally, someone who can write a good blog! This is the kind of information that is useful to those who want to increase their SERPs. I loved your post and will be telling others about it. Subscribing to your RSS feed now. Thanks.

Good points, even if this post is a little bit old. I’ve only been using WordPress for a few months, but my question is similar to the commenter’s above.

What about tags? I see Google indexing not only my main pages, but if I have 10 tags per post, then that’s theoretically another 10 pages that Google is indexing, and it’s obviously duplicate content barring one word, the tag itself.

    Any tips?

For both Tag Archive pages and Category Archive pages, I suggest showing only a list of the post titles. That avoids all duplicate content issues.

  10. I’m sure by now there will be a plug-in to solve the problem. However, if not then it does seem to get around the problem fine. Good stuff.

I think you are absolutely right – robots.txt is a very simple solution for the duplicate content problem. By the way, it is very helpful to check the robots.txt of other sites, looking for any new items in the file.

Good article. This is an excellent way to boost the optimization of a WordPress website. Unfortunately, for those who aren’t technically inclined it is also a bit annoying. I also agree with the first commenter: an XML map is an excellent way to make sure that Google won’t crawl any pages that you didn’t mean for it to crawl.

With a website that isn’t fully coded correctly, it can make an SEO’s life a living nightmare, and it doesn’t matter how many great links you get for the site, it won’t move in the rankings until you get this problem corrected. An XML sitemap is a great first step but, like the article says, all of the coding should be fixed before attempting off-page SEO techniques.

Thanks for sharing the useful information. Another way of avoiding this problem within a site is with the use of a duplicate content checker. There are plenty of these available online, and all it requires is for you to enter the URL of the site that you wish to have analyzed. Within a matter of minutes, you can see if there are any problems and, if so, make the necessary changes.


