During the Search Engine Strategies conference in Chicago, many questions about duplicate content were raised. People are still not completely clear on the concept of duplicate content; it means different things to different people, and there is a fair amount of confusion on the subject.
Google considers substantial blocks of content, within or across domains, to be duplicate content when they either completely match content on other websites or are appreciably similar. Sometimes this duplication is unintentional, but sometimes content is intentionally copied to influence search engine rankings or to drive more traffic via popular or long-tail queries. Duplicate content degrades the search experience: users generally expect unique information from different websites and are annoyed when they find the same, or substantially the same, content repeated across them. Webmasters, likewise, do not appreciate needlessly complex URLs.
So, anyone engaged in such practices should be careful from now on: if duplicate content is found on your site, the affected pages may be relegated to Google's supplemental results.
While crawling and serving search results, Google tries to index and show pages with distinct information. As Google explains: "This filtering means, for instance, that if your site has articles in 'regular' and 'printer' versions and neither set is blocked in robots.txt or via a noindex meta tag, we'll choose one version to list. In the rare cases in which we perceive that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we'll also make appropriate adjustments in the indexing and ranking of the sites involved. However, we prefer to focus on filtering rather than ranking adjustments ... so in the vast majority of cases, the worst thing that'll befall webmasters is to see the 'less desired' version of a page shown in our index."
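For instance, assuming a site publishes its printer-friendly pages under a hypothetical /print/ directory, the owner could either block that directory in robots.txt or add a noindex meta tag to each printer page, so that only the regular version is indexed:

    # robots.txt – keep crawlers out of the hypothetical /print/ directory
    User-agent: *
    Disallow: /print/

    <!-- or, on each printer-friendly page -->
    <meta name="robots" content="noindex">

Either approach tells Google which version you do not want listed, rather than leaving the choice to its duplicate-content filtering.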
Duplicate content can hurt your search engine optimization. To read more on this topic, see the Google Webmasters Blog.