Unique content is essential when creating a website. Since the rollout of Google's Panda algorithm, search engine robots carefully analyze content and evaluate its quality, length and originality. How do you plan content to avoid duplication? What consequences can a site suffer for duplicating text from another site? And, of course, how does this affect search engine rankings?
When does duplication of content occur?
The phenomenon of duplicate content occurs when the same content appears at different URLs. With external duplication, the same text appears on a different domain. Duplicate content within a single site is also very common, and it often arises unknowingly.
Internal duplicate content within a single domain needs to be corrected before it negatively affects a site’s search engine ranking. When is internal duplication most common?
Different variations of one product
Online stores offer thousands of different products, many of which come in several variants, such as color or size. Creating a separate subpage for each variant duplicates the content describing the product, since each subpage usually differs by only a few parameters in the description. It is worth considering a solution that displays all variants under a single URL.
Duplication on pagination pages
In online stores with a large assortment, we have to deal with pagination pages. Most often, a category description appears on the first one. Unfortunately, some CMS systems automatically generate the first pagination page as a copy of the base page. The URL then takes the form ‘page.com/category-name-1’.
To avoid duplicate content at different URLs, redirect the automatically created first pagination page to the category home page, or delete it. Also make sure that the category description appears only on the category home page.
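One common fix is a server-side 301 redirect from the auto-generated first pagination page back to the category page. A minimal Apache `.htaccess` sketch, assuming the duplicate URL pattern 'page.com/category-name-1' described above (the path names are illustrative placeholders):

```apache
# Illustrative .htaccess sketch: permanently (301) redirect the
# auto-generated first pagination page to the category home page.
# "category-name" is a placeholder, not a real path.
RewriteEngine On
RewriteRule ^category-name-1/?$ /category-name [R=301,L]
```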
It is also often the case that the same website can be accessed at several different URLs, for example with and without the 'www' prefix, over 'http' and 'https', or with and without a trailing slash.
Set up appropriate 301 redirects or designate a canonical page so that Google's robots treat all available variants as the same page.
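In HTML, a canonical page is declared with a `<link rel="canonical">` tag placed in the `<head>` of every variant. A short sketch with an illustrative URL:

```html
<!-- Placed in the <head> of every URL variant; the URL is illustrative -->
<link rel="canonical" href="https://example.com/category-name" />
```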
Print versions of pages
If we provide a print version of the page, we must remember to use the ‘noindex’ meta tag. This will inform search engine robots that the page should not be indexed, thus avoiding duplication.
An alternative solution is to simply forgo providing a print version and instead use appropriate CSS sheets.
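Both options are small snippets. Below is a sketch of the 'noindex' meta tag for a print version, and of a print stylesheet that makes a separate print page unnecessary (the selectors are illustrative):

```html
<!-- Option 1: keep the print version of the page out of the index -->
<meta name="robots" content="noindex" />

<!-- Option 2: skip the print version entirely and style printing with CSS -->
<style>
  @media print {
    /* hide navigation and other screen-only elements when printing */
    nav, footer, .sidebar { display: none; }
  }
</style>
```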
Page language versions
Duplicate content can also be caused by different language versions of a site. It often happens that not all subpages are translated, resulting in duplicated content across versions. If we decide to run several language versions of the site, we must remember to implement the hreflang attribute properly.
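For example, a page available in English and Polish would declare each language version in the `<head>` of both versions (the URLs are illustrative):

```html
<link rel="alternate" hreflang="en" href="https://example.com/en/offer" />
<link rel="alternate" hreflang="pl" href="https://example.com/pl/oferta" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```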
Optimize meta tags
A common mistake is not properly optimizing Title and Description tags. These are often automatically generated without adjusting them to the actual content of the page. It is a good practice to create unique content for the meta tags and saturate them appropriately with keywords thematically related to the page.
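A sketch of unique, page-specific meta tags for a product page (all wording is illustrative, not taken from a real store):

```html
<title>Red trail running shoes – Example Store</title>
<meta name="description"
      content="Lightweight red trail running shoes with a grippy sole.
               Free delivery and 30-day returns." />
```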
When does external duplication occur?
External duplication occurs when the same content appears on pages of different domains. This is a serious error that can hurt a site's position in the search engine.
Duplicated product descriptions
This situation occurs most often with stores that carry the same products and publish, as the description, the information provided directly by the manufacturer. Product descriptions are also very often duplicated from those posted on popular sales platforms such as Allegro, Ceneo or OLX.
The best solution to eliminate this type of duplication is to create unique, elaborate product descriptions, saturated with keywords as appropriate.
Illegal content copying
Content published on websites is the work of its authors and is usually protected by copyright. Reproducing it without the author's permission is illegal and, in addition to hurting the site's standing, may also have legal consequences. If you want to quote someone's statement or part of an article, put the text in quotation marks and credit the author.
Replicating content on several platforms
When we have accounts on various platforms and social networks, we usually share information about news appearing on our website. When posting about a new product or blog article, remember to write unique content for each of your social media accounts rather than pasting the same text everywhere.
How to detect duplicate content?
Search engine robots are not always able to verify which text is original and which they should consider plagiarism. So it’s worth checking on your own whether the text you’ve written has been used by someone else.
There are several ways to detect duplication. The first is to look at Google Search Console and its index coverage (Page indexing) report. In addition to errors related to redirects or incorrect indexation, you will also find information about duplicates occurring within the site.
We can also use tools such as Screaming Frog, Copyscape or Siteliner. Their free or trial versions offer most options with some limitations, which is enough to identify some duplicated URLs and estimate the percentage of duplicate content. The paid versions allow a thorough crawl of the site and a more extensive duplication analysis.
For our own initial analysis, we can simply use a search engine. All we need to do is paste a snippet of text found on our site into the search bar. If the displayed results include text that completely or significantly duplicates the fragment we are analyzing, it means that we are dealing with the phenomenon of external duplication.
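A simple do-it-yourself comparison can also be scripted. The sketch below, assuming Python and its standard `difflib` module, estimates how similar two text fragments are; it only illustrates the idea and is no substitute for dedicated tools such as Copyscape or Siteliner:

```python
# Rough sketch: estimate how similar two text fragments are using
# difflib.SequenceMatcher from Python's standard library.
from difflib import SequenceMatcher

def similarity(text_a: str, text_b: str) -> float:
    """Return a similarity ratio between 0.0 (different) and 1.0 (identical)."""
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()

original = "Unique content is essential when creating a website."
copied = "Unique content is essential when creating a website."
rewritten = "Fresh, original copy matters for every new site."

print(similarity(original, copied))     # identical texts give 1.0
print(similarity(original, rewritten))  # a genuine rewrite scores much lower
```

A ratio close to 1.0 suggests the texts duplicate each other; a thorough rewrite drops the score considerably.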
Duplicate content – impact on SEO
The slogan ‘Content is King’ still leads the way in the SEO world. To get good results and rank high in search engines, we should take special care of both the quantity and the quality of content on the site. However, regular publishing and expert content are not everything.
The key factor in the SEO process is the uniqueness and originality of the texts. Thanks to the algorithms, Google’s robots easily catch plagiarism and duplication, whether it is the result of a deliberate action or an accidental mistake.
Constant monitoring and analysis for duplicate content is certainly an essential step in the process of website positioning.