txt file is then parsed and instructs the robot as to which pages are not to be crawled. Because a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not want crawled. Pages typically prevented from being crawled include login-specific pages.
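As a rough illustration of how a crawler consults robots.txt before fetching a page, the sketch below uses Python's standard urllib.robotparser module; the site URL and user-agent name ("ExampleBot") are placeholders, not details from any particular crawler.

```python
# Minimal sketch: check robots.txt before crawling a URL.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # hypothetical site
rp.read()  # fetch and parse the robots.txt file

# Ask whether this crawler is allowed to fetch a given page.
if rp.can_fetch("ExampleBot", "https://example.com/login"):
    print("Allowed to crawl")
else:
    print("Disallowed by robots.txt; skipping")
```

In practice a crawler would cache the parsed rules and refresh them periodically, which is why a page a webmaster has newly disallowed may still be crawled until the cached copy expires.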