09 January 2011

How To Build A Basic Web Crawler To Pull Information From A Website

"Web crawlers, sometimes called scrapers, automatically scan the Internet, attempting to glean the context and meaning of the content they find. The web wouldn’t function without them. Crawlers are the backbone of search engines, which, combined with clever algorithms, work out the relevance of your page to a given keyword set.

The Google web crawler will enter your domain and scan every page of your website, extracting page titles, descriptions, keywords, and links – then report back to Google HQ and add the information to their huge database."
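The per-page half of that job, extracting the title, meta description, keywords, and links, can be done with nothing but Python's standard library. Below is a minimal sketch (the `PageScanner` class and the sample HTML are illustrative, not from the article); a real crawler would add fetching, robots.txt handling, and a queue of links still to visit.

```python
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    """Collects the title, meta tags, and outgoing links from one HTML page."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.links = []   # every href found in <a> tags
        self.meta = {}    # e.g. {"description": ..., "keywords": ...}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "meta" and "name" in attrs and "content" in attrs:
            self.meta[attrs["name"]] = attrs["content"]

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        # Text between <title> and </title> is the page title.
        if self._in_title:
            self.title += data

# A tiny sample page standing in for a fetched document.
html = """<html><head><title>Example Page</title>
<meta name="description" content="A demo page">
<meta name="keywords" content="crawler, demo">
</head><body><a href="/about">About</a> <a href="https://example.com">Home</a></body></html>"""

scanner = PageScanner()
scanner.feed(html)
print(scanner.title)  # Example Page
print(scanner.meta)   # description and keywords
print(scanner.links)  # ['/about', 'https://example.com']
```

Feeding each fetched page through a scanner like this, then pushing its `links` onto a to-visit queue, is essentially the crawl loop the article describes.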

Read Article Source >>
