First we will introduce our readers to the concept of web scraping. Web scraping refers to the technique of fetching information from web pages. Web scraping is in many ways similar to indexing, the main difference being that web scraping aims at mining out structured, meaningful data for further use, whereas indexing refers to fetching and saving the information as it is, along with other metadata.

There are many techniques used for web scraping. Here we present a list of these techniques:

  1. Manual copy and paste: In this technique a person visits the targeted webpage(s) and manually copies and pastes the desired data or information. With increasing awareness about data scraping, many webmasters have started taking precautions to protect the content on their pages, which has left manual copy and paste as the only approach that works in every case.
  2. Text grepping: Using UNIX's grep command or the regular-expression matching facilities provided by different programming languages (see the first sketch after this list).
  3. HTTP programming: Retrieving web pages by sending HTTP requests programmatically, for example via socket programming (see the first sketch after this list).
  4. Web scraping software: Many software tools are available nowadays that allow users to scrape content from targeted web pages.
  5. HTML parsers (see the second sketch after this list)
  6. DOM parsing
  7. Vertical aggregation platforms
  8. Semantic annotation recognizing
  9. Computer vision web-page analyzers
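
To make a couple of these techniques concrete, here is a minimal sketch in Python of techniques 2 and 3 combined: the page is retrieved with an HTTP request and the raw HTML is then "grepped" with a regular expression. The URL and the pattern below are purely illustrative placeholders, not a recommendation for any particular target.

    # Minimal sketch of techniques 2 and 3: fetch a page over HTTP and pull
    # data out of the raw HTML with a regular expression.
    import re
    import urllib.request

    url = "http://example.com/"  # hypothetical target page

    # Technique 3: retrieve the page with an HTTP request.
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")

    # Technique 2: "grep" the text with a regular expression, here for links.
    links = re.findall(r'href="([^"]+)"', html)
    for link in links:
        print(link)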

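Regular expressions can be brittle when the markup changes, which is why HTML parsers (technique 5) walk the document structure instead. The sketch below extracts the same links using only Python's standard-library HTML parser; in practice third-party libraries such as BeautifulSoup or lxml are commonly used for convenience. The URL is again a hypothetical placeholder.

    # Minimal sketch of technique 5: extract links with an HTML parser
    # instead of regular expressions, using only the standard library.
    import urllib.request
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            # Collect the href attribute of every anchor tag encountered.
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    url = "http://example.com/"  # hypothetical target page
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")

    parser = LinkExtractor()
    parser.feed(html)
    print(parser.links)
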
Note: A detailed discussion of these methods is beyond the scope of this article; readers are advised to search the web for detailed explanations and to look at the Wikipedia page on web scraping for further details.

(http://en.wikipedia.org/wiki/Web_scraping)

Though web scraping is a common practice on the World Wide Web, it is not always advisable, as it may have serious legal implications. For example, many websites have legal policies and terms of use prohibiting the scraping of their content. There have already been cases where the owner of a targeted website has obtained a court injunction ordering the scraping party to stop accessing the site, and scrapers have been penalized with both monetary fines and prison terms.

It is because of such unauthorized uses of data that webmasters employ a number of techniques to prevent others from scraping data from their websites. Some worth mentioning are blocking IP addresses, displaying information as images instead of text, disabling web APIs, using commercial anti-bot services and blocking bot access.

There are also many productive uses for web scraping; what matters is the intended use of the data and the mutual understanding between all the parties involved. After all, web scraping is simply a technique and is neither good nor bad by itself. In the end, it depends on how its users employ it.

About 1Solutions: A 360-degree web and internet marketing services company based in New Delhi, India. Our clients include the likes of Verizon, Nuance, SBI and Carzonrent. We provide web design, development, internet marketing and branding services to clients worldwide.