Thursday, February 02, 2006

Google's Spider Search Magic



Dell apparently learned the hard way this week that companies have to be careful: information they store on the Internet but want to keep hidden can be automatically added to a search engine index for everyone on the Web to see.
Specifications for future Dell notebooks were accessible via Google's search site before the content was pulled from a Dell file transfer protocol site and from Google's cache.
Google, like the other major search engines, automatically sends software robots called "spiders" out to crawl the Web and find sites to add to the index of Web sites it maintains. Because the spiders follow links running from one Web site to others, they pick up sites on their own without Webmasters having to manually submit them to search engines.
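To make that follow-the-links idea concrete, here is a rough Python sketch of how a crawler discovers pages. It is not Google's code, and the start URL and page limit are just placeholders, but the pattern is the same: download a page, pull out its links, and queue them up for the next round.

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class LinkExtractor(HTMLParser):
        """Collects the href targets of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=10):
        seen, queue = set(), [start_url]
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
            except Exception:
                continue  # skip pages that fail to download
            parser = LinkExtractor()
            parser.feed(html)
            # Newly discovered links go back into the queue, so sites get
            # picked up without anyone submitting them by hand.
            queue.extend(urljoin(url, link) for link in parser.links)
        return seen

    # print(crawl("http://www.example.com/"))  # placeholder start URL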
Webmasters also can provide the URL, or Web address, for pages they want crawled, and they can submit detailed site maps to Google, according to Google's information for Webmasters pages.
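For illustration, a site map entry generally looks something like the snippet below; the exact schema and namespace to use are spelled out on Google's Sitemaps pages, and the URL and dates here are just placeholders.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
      <url>
        <loc>http://www.example.com/products/index.html</loc>
        <lastmod>2006-01-15</lastmod>
        <changefreq>monthly</changefreq>
      </url>
    </urlset>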
Webmasters who want to keep some or all of their site private from the Googlebot can put a standard file called "robots.txt" at the root of the server that instructs the crawler not to download content. If the removal request is urgent, the Webmaster can submit a request via Google's automatic URL removal system, but must provide an e-mail address and password first.
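As an example, a robots.txt like the one below (the directory names are made up) tells Googlebot to stay out of one directory and tells every other crawler to stay out of another:

    # Keep Googlebot out of the draft-specs area
    User-agent: Googlebot
    Disallow: /drafts/

    # Keep all other crawlers out of the private area
    User-agent: *
    Disallow: /private/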
Content that has been removed can still be viewed through Google's cache, which is a "snapshot" and archive of each page crawled. Webmasters can prevent pages from being cached by inserting specific code on them.
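One standard way to do that is a robots meta tag in the page's HTML head: "noarchive" keeps the cached copy from being shown, while "noindex" keeps the page out of the index entirely. A minimal example:

    <head>
      <!-- Tell crawlers not to keep a cached copy of this page -->
      <meta name="robots" content="noarchive">
    </head>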

I think this is good information to share...

