Scraper site

From Infogalactic: the planetary knowledge core

A scraper site is a website that copies content from other websites using web scraping. Such sites are typically created to collect advertising revenue or to manipulate search engine rankings, for example by linking out to other sites through a private blog network to boost their rankings.

Prior to Google's Panda search engine update, a type of scraper site known as an auto blog was quite common among black hat marketers, who used it in a practice known as spamdexing.

A search engine is not a scraper site itself; sites such as Yahoo and Google gather content from other websites and index it so that the index can be searched with keywords. Search engines then display snippets of the original site content in response to a user's search.

Made for advertising

Some scraper sites are created to make money by using advertising programs. In such cases, they are called Made for AdSense (MFA) sites. This derogatory term refers to websites that have no redeeming value except to lure visitors for the sole purpose of clicking on advertisements.[1]

Made for AdSense sites are considered search engine spam that dilutes search results by providing surfers with less-than-satisfactory results. The scraped content is redundant with what the search engine would have shown under normal circumstances, had no MFA website appeared in the listings.

Legality

Scraper sites may violate copyright law. Even taking content from an open content site can be a copyright violation, if done in a way which does not respect the license. For instance, the GNU Free Documentation License (GFDL)[2] and Creative Commons ShareAlike (CC-BY-SA)[3] licenses, used on Wikipedia,[4] require that a republisher inform readers of the license conditions, and give credit to the original author.

Techniques


The methods by which websites are targeted differ depending upon the scraper's objective. For example, sites with large amounts of content, such as airlines, consumer electronics retailers, and department stores, may be routinely targeted by their competitors, often to stay abreast of pricing information. Sophisticated scraping activity can be camouflaged by using multiple IP addresses and by timing requests so that they proceed at human-like rather than robot-like speeds.
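The timing camouflage described above can be sketched as follows. This is a hypothetical illustration, not any particular scraper's code: instead of issuing requests at a fixed machine-speed interval, the scraper draws a random, human-scale pause before each request. The function name and delay bounds are assumptions for the example.

```python
import random

def humanlike_delays(n_requests, min_delay=2.0, max_delay=10.0, seed=None):
    """Return randomized pauses (in seconds) to insert between requests,
    so the scrape does not proceed at a constant, robot-like rate."""
    rng = random.Random(seed)  # seedable for reproducible tests
    return [rng.uniform(min_delay, max_delay) for _ in range(n_requests - 1)]

# A scraper would then sleep for each pause between fetches, e.g.:
#   for url, pause in zip(urls[1:], humanlike_delays(len(urls))):
#       time.sleep(pause)
#       fetch(url)
delays = humanlike_delays(5, seed=1)
print(delays)
```

Defences that flag clients by request rate are looking for exactly the uniformity this randomization removes, which is why the pauses are drawn from a range rather than fixed.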

Some scrapers pull snippets and text from websites that rank highly for the keywords they have targeted, hoping thereby to rank highly themselves in the search engine results pages (SERPs). RSS feeds are particularly vulnerable to scrapers.
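RSS feeds are easy targets because a feed is plain, predictably structured XML, so titles and snippets can be lifted with nothing more than a standard-library parser. The sketch below uses an inline stand-in for a fetched feed; the element names follow the RSS 2.0 format.

```python
import xml.etree.ElementTree as ET

# Stand-in for the XML a scraper would download from a real feed URL.
FEED = """<rss version="2.0"><channel>
  <item><title>First post</title><description>Snippet one</description></item>
  <item><title>Second post</title><description>Snippet two</description></item>
</channel></rss>"""

def scrape_feed(xml_text):
    """Extract (title, snippet) pairs from an RSS 2.0 feed document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("description"))
            for item in root.iter("item")]

print(scrape_feed(FEED))
# [('First post', 'Snippet one'), ('Second post', 'Snippet two')]
```

Because the feed already separates each article into titled items, the scraper needs no HTML parsing or layout heuristics at all, which is the vulnerability the text refers to.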

Some scraper sites consist of advertisements and paragraphs of words randomly selected from a dictionary. Often a visitor will click on a pay-per-click advertisement because it is the only comprehensible text on the page. Operators of these scraper sites gain financially from these clicks. Advertising networks claim to be constantly working to remove these sites from their programs, although there is an ongoing controversy over this, since the networks benefit directly from the clicks generated at such sites. From the advertisers' point of view, the networks do not seem to be making enough effort to stop this problem.

Scrapers tend to be associated with link farms and are sometimes perceived as the same thing when multiple scrapers link to the same target site. A frequently targeted site may even be accused of link-farm participation because of the artificial pattern of incoming links pointing at it from multiple scraper sites.

Domain hijacking


Some programmers who create scraper sites may purchase a recently expired domain name to hijack its SEO power. Doing so allows spammers to exploit the backlinks already established for the domain. Some spammers may even try to match the topic of the expired site, or copy its former content from the Internet Archive, to preserve the site's apparent authenticity so that the backlinks do not drop. For example, an expired website about a photographer may be re-registered as a site about photography tips, or the domain may be used in a private blog network to power the spammer's own photography site.[citation needed]

References

  1. Made for AdSense
  2. GNU Free Documentation License.
  3. Creative Commons Attribution-ShareAlike license.
  4. Wikipedia licensing terms.