WebCrawler, a search engine, is a computer software program for locating information on the World Wide Web (WWW). WebCrawler was developed by University of Washington graduate student Brian Pinkerton in 1994 but is now maintained by America Online, Inc., a commercial Internet service provider.

WebCrawler uses a program called a spider. A spider, sometimes called a robot, softbot, spiderbot, wanderer, crawler, or fish, is a computer program that automatically retrieves Web documents. Most Web pages include at least one link (an automatic connection) to another Web page, and some include hundreds of links. A spider takes advantage of this structure by starting at one Web page and working its way outward, following every link on that page and then every link provided by the new Web pages. Some spiders save the URL (Uniform Resource Locator), or address, of every Web page they visit. Search engines use these spiders to build indexes of Web pages that users can access to search for information on a particular topic. These indexing spiders, as they are called, often also store the title and partial or complete text of a Web page so users can do more detailed searches.

WebCrawler uses its spider to search the WWW for new documents (called Web pages) and to index all the words in those documents. A person using WebCrawler enters a keyword or phrase, and WebCrawler provides a list of the titles of all indexed documents that contain that word or phrase. Each title is linked to the document's site on the WWW, so users can go directly from the list to any listed document.
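
The crawling-and-indexing process described above can be illustrated with a minimal sketch in Python. This is an assumption-laden illustration, not WebCrawler's actual implementation: the function names (crawl, search), the in-memory dictionary index, the page limit, and the use of Python's standard library are all choices made for the example.

```python
# Minimal sketch of an indexing spider: start at one page, follow every link,
# and store the URL, title, and text of each page visited.
# Illustrative only; not WebCrawler's actual code.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects the links, title, and visible text of one HTML page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []
        self.title = ""
        self.text_parts = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        else:
            self.text_parts.append(data)


def crawl(seed_url, max_pages=20):
    """Start at one Web page and work outward by following every link found."""
    index = {}                 # URL -> (title, full text)
    queue = deque([seed_url])  # pages waiting to be visited
    visited = set()

    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except Exception:
            continue           # unreachable or non-HTML page: skip it
        parser = PageParser(url)
        parser.feed(html)
        # Save the URL, title, and text so detailed keyword searches are possible.
        index[url] = (parser.title.strip(), " ".join(parser.text_parts))
        # Follow every link provided by the newly visited page.
        queue.extend(link for link in parser.links if link.startswith("http"))

    return index


def search(index, keyword):
    """Return the titles and URLs of indexed documents containing the keyword."""
    keyword = keyword.lower()
    return [(title, url) for url, (title, text) in index.items()
            if keyword in title.lower() or keyword in text.lower()]
```

Under these assumptions, calling crawl("http://example.com/") builds a small index, and search(index, "web") then returns the title and address of every indexed page whose stored text contains the word "web", mirroring the keyword lookup a WebCrawler user performs.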